SENTIMENT ANALYSIS OF RESPONSES FROM BENEFICIARIES TO LIVELIHOOD SUBPROJECTS IN ZAMBIA¶
Author: Nathan Namatama
Institution: Leibniz Institute of Ecological Urban and Regional Development and Technical University of Dresden
Year: 2025
Related Publication: The effects and impacts of livelihood activities and unplanned human settlement growth on greenspace and wetland landscapes in Zambia: A case of the three areas of the Pilot Programme for Climate Resilience (PPCR)
Purpose of the Analysis¶
The analysis is based on data collected in Zambia from 23rd July 2024 to 22nd September 2024. The primary data were collected using the ArcGIS Survey123 application, in offline mode in areas that had no internet access and in online mode where internet was available. The respondents were interviewed using a semi-structured questionnaire, and their responses were recorded in the application as they answered the questions.
The analysis is framed by systems thinking, looking at deep leverage points in governance for the transformation of social-ecological systems so as to attain sustainable transformation. The analysis is both qualitative and quantitative (descriptive), producing graphs and tables that are visualised within the JupyterLab notebook.
1. Importing Libraries¶
The libraries needed for conducting the analysis are installed and imported. They are as follows:
- NLTK: For reading and processing natural-language text
- Re: For text manipulation and pattern matching
- Pandas: For working with tabular data (dataframes) as well as visualisation
- Numpy: For conducting statistical calculations
- Matplotlib: For visualisation
- Seaborn: For visualisation
- Io: For reading/writing in-memory binary and text streams
- Csv: For reading the CSV files
- Unicodedata: For interacting with and analysing Unicode characters
- String: For string constants and operations used in text processing
- Plotly: For visualisation
- Plot_Likert: For visualisation of Likert scales
- %matplotlib inline: For visualisation within the JupyterLab notebook
- Nbconvert: For converting the notebook to HTML format
- WordCloud: For creating a word cloud
- Bigrams: For making words into pairs
- Trigrams: For making three-word sequences
- GridSpec: For plotting a graph in a specific grid
import nltk
import re
from collections import Counter
from nltk.probability import FreqDist
from nltk.corpus import stopwords
from nltk.stem import WordNetLemmatizer
from nltk.tokenize import sent_tokenize
from nltk.tokenize import word_tokenize
from nltk import sent_tokenize, word_tokenize, pos_tag
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.gridspec import GridSpec
import seaborn as sns
import io
from io import StringIO
import csv
import unicodedata
import string
import plotly
import plotly.express as px
import plot_likert
from sklearn.model_selection import train_test_split
%matplotlib inline
import nbconvert
from nbconvert import HTMLExporter
import nbformat
from wordcloud import WordCloud
from nltk import bigrams
from nltk import trigrams
import subprocess
import shutil
import pypandoc
nltk.download('punkt_tab')
nltk.download('punkt')
nltk.download('stopwords')
nltk.download('wordnet')
[nltk_data] Downloading package punkt_tab to
[nltk_data]     C:\Users\nazin\AppData\Roaming\nltk_data...
[nltk_data]   Package punkt_tab is already up-to-date!
[nltk_data] Downloading package punkt to
[nltk_data]     C:\Users\nazin\AppData\Roaming\nltk_data...
[nltk_data]   Package punkt is already up-to-date!
[nltk_data] Downloading package stopwords to
[nltk_data]     C:\Users\nazin\AppData\Roaming\nltk_data...
[nltk_data]   Package stopwords is already up-to-date!
[nltk_data] Downloading package wordnet to
[nltk_data]     C:\Users\nazin\AppData\Roaming\nltk_data...
[nltk_data]   Package wordnet is already up-to-date!
True
2. Reading the CSV File¶
The CSV file is converted to a Pandas dataframe.
The dataframe table is displayed with ALL columns and rows, without cutting any off.
df = pd.read_csv(r"D:\DataAnalysis\Social_Survey_Questionnaire_for_Beneficiaries_0.csv")
pd.set_option('display.max_colwidth', None)
pd.set_option('display.max_rows', None)
pd.set_option('display.max_columns', None)
type(df)
pandas.core.frame.DataFrame
#df
df.shape
(150, 120)
3. Missing values¶
The Pandas dataframe is checked for missing values
#df.isnull().sum()
3.1 Dropping all the Missing Values¶
#df.isnull().sum().sort_values(ascending=False)
#df.dropna(inplace=True)
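Before dropping rows, the extent of missingness can be summarised per column. This is a minimal sketch with a hypothetical two-column dataframe standing in for the survey data; the same pattern applies to the `df` loaded above:

```python
import pandas as pd

# Hypothetical small dataframe standing in for the survey table `df`
df_demo = pd.DataFrame({
    "Ward_Name": ["A", "B", None, "D"],
    "Size": [1.0, None, None, 2.0],
})

# Count and percentage of missing values per column, worst column first
missing = pd.DataFrame({
    "n_missing": df_demo.isnull().sum(),
    "pct_missing": (df_demo.isnull().mean() * 100).round(1),
}).sort_values("n_missing", ascending=False)
print(missing)
```

Columns with a high share of missing values are candidates for dropping whole, while rows are usually dropped only when few values are missing.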
4. Deleting Columns¶
There are 120 columns, of which 85 were removed from the Pandas dataframe so as to focus on the specific columns that contain questions regarding livelihood and landscape transformation
df1=df.drop(df.columns[[1,2,3,4,5,6,7,8,9,10,11,12,13,14,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,75,76,77,78,79,80,81,82,83,84,85,86,87,88,89,90,91,92,93,94,95,96,97,98,99,100,101,102,103,104,105,106,107,108,109,110,111,116,117,118,119]], axis = 1)
#df1
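The drop above relies on positional column indices, which silently break if the Survey123 export ever changes column order. As a hedged alternative (shown on a hypothetical three-column dataframe, not the actual 120-column table), the columns of interest can be kept by name instead:

```python
import pandas as pd

# Hypothetical dataframe standing in for the wide survey table
df_demo = pd.DataFrame({
    "ObjectID": [1, 2],
    "6. Ward Name": ["W1", "W2"],
    "Unrelated_Column": ["x", "y"],
})

# Keeping columns by an explicit name list is more robust than dropping
# by position: a renamed or reordered export raises a KeyError instead
# of silently selecting the wrong columns
keep = ["ObjectID", "6. Ward Name"]
df1_demo = df_demo[keep]
print(df1_demo.shape)
```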
5. Renaming Columns¶
The column headings were renamed so that they can be referenced easily in the code
df1.columns
Index(['ObjectID', '6. Ward Name', '7. Sub Project Name',
'8. Type of Livelihood', '9. Size (Lima)', '12. Name of main project',
'13. Do you represent other beneficiaries?',
'14. How many beneficiaries do you represent?',
'45. Are there cultural practices that hinder the sustainable management of Forests, Wetlands, National Game Parks and Biodiversity?',
'46. What reasons can you give for your answer above?',
'47. Do you think some cultural practices can be changed?',
'48. Do you consider cultural aspects when formulating the livelihood projects?',
'49. What reasons can you give for your answer above?',
'50. What is the main purpose of landscapes (Forests, Water Bodies, Wetlands… etc.) in your livelihood?',
'51. Do you think there is need to measure indicators when managing landscapes?',
'52. How is your connection to nature like?',
'53. What reasons can you give for your answer above?',
'54. How long have you worked on this livelihood project?',
'55. Does your livelihood depend on the natural resources for a living?',
'56. Do you consider changing your livelihood strategy in future?',
'57. Do you think it is easier to change your livelihood practices?',
'58. Are you able to give reasons for your answer above in your ability to change your livelihood practices?',
'59. Have the ecosystem services reduced from the inception of the project in your ward?',
'60. Has the deforestation increased in the ward?',
'61. Do you think protected areas are a hindrance to your livelihoods?',
'62. Are there new livelihood projects that you think of that have never been implemented?',
'63. Do you think the livelihood subprojects are contributing to the sustainability of landscapes?',
'64. Which livelihood is a major contributor to landscape transformation?',
'65. What reasons can you give for your answer above?',
'66. Which type of landscape do you depend on much for a livelihood?',
'67. What reasons can you give for your answer above?', 'Specify:.4',
'Specify:.5', 'Specify:.6', 'Specify:.7'],
dtype='object')
df2=df1.rename(columns= {'1. Do you agree to take part in the above study?': 'Part_study',
'2. Do you know that your participation is voluntary and you are free to withdraw anytime?':'Participation_Voluntary',
'3. Do you give permission to the data that emerges to be used by the researchers only in an anonymised form?': 'Anonymised_Form',
'5. Date': 'Date',
'6. Ward Name': 'Ward_Name',
'7. Sub Project Name': 'Sub_Project_name',
'8. Type of Livelihood': 'Livelihood',
'9. Size (Lima)': 'Size',
'12. Name of main project': 'Name_Main_Project',
'13. Do you represent other beneficiaries?': 'Representing_Others',
'14. How many beneficiaries do you represent?': 'Number_Beneficiaries',
'45. Are there cultural practices that hinder the sustainable management of Forests, Wetlands, National Game Parks and Biodiversity?': 'Cultural_Practices_Hinder',
'46. What reasons can you give for your answer above?': 'Cultural_Practices_Hinder_Reason',
'47. Do you think some cultural practices can be changed?': 'Cultural_Practices_Changed',
'48. Do you consider cultural aspects when formulating the livelihood projects?': 'Cultural_Aspects_Considered',
'49. What reasons can you give for your answer above?': 'Cultural_Aspects_Considered_Reasons',
'50. What is the main purpose of landscapes (Forests, Water Bodies, Wetlands… etc.) in your livelihood?': 'Purpose_Landscape',
'51. Do you think there is need to measure indicators when managing landscapes?': 'Measure_Indicators',
'52. How is your connection to nature like?': 'Connection_Nature',
'53. What reasons can you give for your answer above?': 'Connection_Nature_Reasons',
'54. How long have you worked on this livelihood project?': 'Range_Years',
'55. Does your livelihood depend on the natural resources for a living?': 'Livilihood_Depenedent',
'56. Do you consider changing your livelihood strategy in future?': 'Change_Livelihood',
'57. Do you think it is easier to change your livelihood practices?': 'Change_Livelihood_Easy',
'58. Are you able to give reasons for your answer above in your ability to change your livelihood practices?': 'Change_Livelihood_Easy_Reasons',
'59. Have the ecosystem services reduced from the inception of the project in your ward?': 'Ecosystem_Services_Reduced',
'60. Has the deforestation increased in the ward?': 'Deforestaion_Increased',
'61. Do you think protected areas are a hindrance to your livelihoods?': 'Protected_Areas_Hinderarnce_Livelihood',
'62. Are there new livelihood projects that you think of that have never been implemented?': 'New_Livelihood_Projects',
'63. Do you think the livelihood subprojects are contributing to the sustainability of landscapes?': 'Subprojects_Sustainability_Contribution',
'64. Which livelihood is a major contributor to landscape transformation?': 'Contributor_Landscape_Transformation',
'65. What reasons can you give for your answer above?': 'Contributor_Landscape_Transformation_Reasons',
'66. Which type of landscape do you depend on much for a livelihood?': 'Landscape_Depended_Livelihood',
'67. What reasons can you give for your answer above?': 'Landscape_Depeneded_Livelihood_Reasons',
'Specify:.4': 'Purpose_Landscape_Specific',
'Specify:.5': 'Connection_Nature_Specific',
'Specify:.6': 'Contributor_Landscape_Transformation_Specific',
'Specify:.7': 'Landscape_Depended_Livelihood_Specific'})
#df2
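The explicit rename dictionary gives full control over the short names. For the simpler headings, the same effect can be sketched with a small regex-based helper that strips the leading question number and joins the words with underscores (the dataframe and helper here are hypothetical illustrations, not the notebook's actual mapping):

```python
import re
import pandas as pd

# Hypothetical columns in the style of the questionnaire headings
df_demo = pd.DataFrame(columns=[
    "6. Ward Name",
    "7. Sub Project Name",
    "47. Do you think some cultural practices can be changed?",
])

def shorten(col: str) -> str:
    # Drop the leading question number ("47. ") and trailing punctuation,
    # then join the remaining words with underscores
    col = re.sub(r"^\d+\.\s*", "", col)
    col = re.sub(r"[?.]+$", "", col)
    return "_".join(col.split())

# DataFrame.rename accepts a callable applied to every column label
df_renamed = df_demo.rename(columns=shorten)
print(list(df_renamed.columns))
```

For long questions, a curated mapping such as the dictionary above still produces more readable names than any automatic rule.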
6. Selection of Likert Scale Columns¶
The 12 columns that had Likert-scale responses were grouped into a single dataframe. These columns correspond to the following questions, as numbered in the questionnaire:
- 45. Are there cultural practices that hinder the sustainable management of Forests, Wetlands, National Game Parks and Biodiversity?
- 47. Do you think some cultural practices can be changed?
- 48. Do you consider cultural aspects when formulating the livelihood projects?
- 51. Do you think there is need to measure indicators when managing landscapes?
- 55. Does your livelihood depend on the natural resources for a living?
- 56. Do you consider changing your livelihood strategy in future?
- 57. Do you think it is easier to change your livelihood practices?
- 59. Have the ecosystem services reduced from the inception of the project in your ward?
- 60. Has the deforestation increased in the ward?
- 61. Do you think protected areas are a hindrance to your livelihoods?
- 62. Are there new livelihood projects that you think of that have never been implemented?
- 63. Do you think the livelihood subprojects are contributing to the sustainability of landscapes?
df3=df2.drop(df2.columns[[0,1,2,3,4,5,6,7,9,12,13,15,16,17,21,27,28,29,30,31,32,33,34]], axis = 1)
#df3
7. The Number of Responses¶
The number of responses was counted for each column that had a Likert scale; NaN indicates no response
all_counts = df3.apply(pd.Series.value_counts, dropna=False)
all_counts
| Cultural_Practices_Hinder | Cultural_Practices_Changed | Cultural_Aspects_Considered | Measure_Indicators | Livilihood_Depenedent | Change_Livelihood | Change_Livelihood_Easy | Ecosystem_Services_Reduced | Deforestaion_Increased | Protected_Areas_Hinderarnce_Livelihood | New_Livelihood_Projects | Subprojects_Sustainability_Contribution | |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Agree_Likert | 20 | 29 | 19 | 28 | 35 | 39 | 46 | 39 | 16 | 14.0 | 58 | 37 |
| Disagree_Likert | 22 | 22 | 20 | 4 | 25 | 37 | 24 | 13 | 32 | 19.0 | 15 | 4 |
| Strongly_Agree_Likert | 21 | 29 | 22 | 81 | 44 | 38 | 49 | 72 | 72 | 15.0 | 36 | 81 |
| Strongly_Disagree_Likert | 70 | 41 | 66 | 22 | 32 | 20 | 18 | 16 | 17 | 83.0 | 19 | 15 |
| Undecided_Likert | 4 | 9 | 7 | 7 | 4 | 5 | 4 | 3 | 1 | NaN | 10 | 3 |
| NaN | 13 | 20 | 16 | 8 | 10 | 11 | 9 | 7 | 12 | 19.0 | 12 | 10 |
7.1 Transposing the Dataframe Table¶
The columns and the rows were interchanged so that the results can be presented easily on a graph
all_counts1 = all_counts.head().T
all_counts1
| Agree_Likert | Disagree_Likert | Strongly_Agree_Likert | Strongly_Disagree_Likert | Undecided_Likert | |
|---|---|---|---|---|---|
| Cultural_Practices_Hinder | 20.0 | 22.0 | 21.0 | 70.0 | 4.0 |
| Cultural_Practices_Changed | 29.0 | 22.0 | 29.0 | 41.0 | 9.0 |
| Cultural_Aspects_Considered | 19.0 | 20.0 | 22.0 | 66.0 | 7.0 |
| Measure_Indicators | 28.0 | 4.0 | 81.0 | 22.0 | 7.0 |
| Livilihood_Depenedent | 35.0 | 25.0 | 44.0 | 32.0 | 4.0 |
| Change_Livelihood | 39.0 | 37.0 | 38.0 | 20.0 | 5.0 |
| Change_Livelihood_Easy | 46.0 | 24.0 | 49.0 | 18.0 | 4.0 |
| Ecosystem_Services_Reduced | 39.0 | 13.0 | 72.0 | 16.0 | 3.0 |
| Deforestaion_Increased | 16.0 | 32.0 | 72.0 | 17.0 | 1.0 |
| Protected_Areas_Hinderarnce_Livelihood | 14.0 | 19.0 | 15.0 | 83.0 | NaN |
| New_Livelihood_Projects | 58.0 | 15.0 | 36.0 | 19.0 | 10.0 |
| Subprojects_Sustainability_Contribution | 37.0 | 4.0 | 81.0 | 15.0 | 3.0 |
7.2 Changing the Order of Columns¶
The order of the columns was changed so that the responses can be analysed easily
all_counts2 = all_counts1.iloc[:, [3, 1, 4, 0, 2]]
all_counts2
| Strongly_Disagree_Likert | Disagree_Likert | Undecided_Likert | Agree_Likert | Strongly_Agree_Likert | |
|---|---|---|---|---|---|
| Cultural_Practices_Hinder | 70.0 | 22.0 | 4.0 | 20.0 | 21.0 |
| Cultural_Practices_Changed | 41.0 | 22.0 | 9.0 | 29.0 | 29.0 |
| Cultural_Aspects_Considered | 66.0 | 20.0 | 7.0 | 19.0 | 22.0 |
| Measure_Indicators | 22.0 | 4.0 | 7.0 | 28.0 | 81.0 |
| Livilihood_Depenedent | 32.0 | 25.0 | 4.0 | 35.0 | 44.0 |
| Change_Livelihood | 20.0 | 37.0 | 5.0 | 39.0 | 38.0 |
| Change_Livelihood_Easy | 18.0 | 24.0 | 4.0 | 46.0 | 49.0 |
| Ecosystem_Services_Reduced | 16.0 | 13.0 | 3.0 | 39.0 | 72.0 |
| Deforestaion_Increased | 17.0 | 32.0 | 1.0 | 16.0 | 72.0 |
| Protected_Areas_Hinderarnce_Livelihood | 83.0 | 19.0 | NaN | 14.0 | 15.0 |
| New_Livelihood_Projects | 19.0 | 15.0 | 10.0 | 58.0 | 36.0 |
| Subprojects_Sustainability_Contribution | 15.0 | 4.0 | 3.0 | 37.0 | 81.0 |
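The reordering in 7.2 uses positional `iloc` indices. A sketch of the same step using an explicit list of column names (with a hypothetical one-row counts table), which raises a `KeyError` on a misspelt level instead of silently picking the wrong column:

```python
import pandas as pd

# Hypothetical counts table with Likert levels as columns
counts_demo = pd.DataFrame(
    {"Agree_Likert": [20], "Disagree_Likert": [22], "Strongly_Agree_Likert": [21],
     "Strongly_Disagree_Likert": [70], "Undecided_Likert": [4]},
    index=["Cultural_Practices_Hinder"],
)

# Reorder by naming the Likert levels from most negative to most positive
order = ["Strongly_Disagree_Likert", "Disagree_Likert", "Undecided_Likert",
         "Agree_Likert", "Strongly_Agree_Likert"]
counts_ordered = counts_demo[order]
print(list(counts_ordered.columns))
```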
7.3 Visualising the Results¶
The results were visualised as the number of responses
%matplotlib inline
plot_likert.plot_counts(all_counts2, plot_likert.scales.agree, plot_percentage=False, bar_labels=True, bar_labels_color="snow", colors=plot_likert.colors.default_with_darker_neutral)
plt.title("Figure 1: The Total Number of Responses to Variables on Sustainable Transformation", fontsize=14)
plt.show()
C:\Users\nazin\AppData\Local\anaconda3\envs\NLTK_Py_3_12\Lib\site-packages\plot_likert\plot_likert.py:101: FutureWarning: parameter `plot_percentage` for `plot_likert.likert_counts` is deprecated, set it to None and use `compute_percentages` instead
  warn(
7.4 Converting the Responses to Percentages¶
The responses were converted to percentages
all_counts3 = df3.apply(lambda col: col.value_counts(normalize=True, dropna=False).round(2))
all_counts3
| Cultural_Practices_Hinder | Cultural_Practices_Changed | Cultural_Aspects_Considered | Measure_Indicators | Livilihood_Depenedent | Change_Livelihood | Change_Livelihood_Easy | Ecosystem_Services_Reduced | Deforestaion_Increased | Protected_Areas_Hinderarnce_Livelihood | New_Livelihood_Projects | Subprojects_Sustainability_Contribution | |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Agree_Likert | 0.13 | 0.19 | 0.13 | 0.19 | 0.23 | 0.26 | 0.31 | 0.26 | 0.11 | 0.09 | 0.39 | 0.25 |
| Disagree_Likert | 0.15 | 0.15 | 0.13 | 0.03 | 0.17 | 0.25 | 0.16 | 0.09 | 0.21 | 0.13 | 0.10 | 0.03 |
| Strongly_Agree_Likert | 0.14 | 0.19 | 0.15 | 0.54 | 0.29 | 0.25 | 0.33 | 0.48 | 0.48 | 0.10 | 0.24 | 0.54 |
| Strongly_Disagree_Likert | 0.47 | 0.27 | 0.44 | 0.15 | 0.21 | 0.13 | 0.12 | 0.11 | 0.11 | 0.55 | 0.13 | 0.10 |
| Undecided_Likert | 0.03 | 0.06 | 0.05 | 0.05 | 0.03 | 0.03 | 0.03 | 0.02 | 0.01 | NaN | 0.07 | 0.02 |
| NaN | 0.09 | 0.13 | 0.11 | 0.05 | 0.07 | 0.07 | 0.06 | 0.05 | 0.08 | 0.13 | 0.08 | 0.07 |
7.4.1 Transposing the Dataframe Table¶
Interchanging the rows and columns
all_counts4 = all_counts3.head().T
all_counts4
| Agree_Likert | Disagree_Likert | Strongly_Agree_Likert | Strongly_Disagree_Likert | Undecided_Likert | |
|---|---|---|---|---|---|
| Cultural_Practices_Hinder | 0.13 | 0.15 | 0.14 | 0.47 | 0.03 |
| Cultural_Practices_Changed | 0.19 | 0.15 | 0.19 | 0.27 | 0.06 |
| Cultural_Aspects_Considered | 0.13 | 0.13 | 0.15 | 0.44 | 0.05 |
| Measure_Indicators | 0.19 | 0.03 | 0.54 | 0.15 | 0.05 |
| Livilihood_Depenedent | 0.23 | 0.17 | 0.29 | 0.21 | 0.03 |
| Change_Livelihood | 0.26 | 0.25 | 0.25 | 0.13 | 0.03 |
| Change_Livelihood_Easy | 0.31 | 0.16 | 0.33 | 0.12 | 0.03 |
| Ecosystem_Services_Reduced | 0.26 | 0.09 | 0.48 | 0.11 | 0.02 |
| Deforestaion_Increased | 0.11 | 0.21 | 0.48 | 0.11 | 0.01 |
| Protected_Areas_Hinderarnce_Livelihood | 0.09 | 0.13 | 0.10 | 0.55 | NaN |
| New_Livelihood_Projects | 0.39 | 0.10 | 0.24 | 0.13 | 0.07 |
| Subprojects_Sustainability_Contribution | 0.25 | 0.03 | 0.54 | 0.10 | 0.02 |
7.4.2 Changing the Order of Columns¶
The order of columns was changed
all_counts5 = all_counts4.iloc[:, [3, 1, 4, 0, 2]]
all_counts5
| Strongly_Disagree_Likert | Disagree_Likert | Undecided_Likert | Agree_Likert | Strongly_Agree_Likert | |
|---|---|---|---|---|---|
| Cultural_Practices_Hinder | 0.47 | 0.15 | 0.03 | 0.13 | 0.14 |
| Cultural_Practices_Changed | 0.27 | 0.15 | 0.06 | 0.19 | 0.19 |
| Cultural_Aspects_Considered | 0.44 | 0.13 | 0.05 | 0.13 | 0.15 |
| Measure_Indicators | 0.15 | 0.03 | 0.05 | 0.19 | 0.54 |
| Livilihood_Depenedent | 0.21 | 0.17 | 0.03 | 0.23 | 0.29 |
| Change_Livelihood | 0.13 | 0.25 | 0.03 | 0.26 | 0.25 |
| Change_Livelihood_Easy | 0.12 | 0.16 | 0.03 | 0.31 | 0.33 |
| Ecosystem_Services_Reduced | 0.11 | 0.09 | 0.02 | 0.26 | 0.48 |
| Deforestaion_Increased | 0.11 | 0.21 | 0.01 | 0.11 | 0.48 |
| Protected_Areas_Hinderarnce_Livelihood | 0.55 | 0.13 | NaN | 0.09 | 0.10 |
| New_Livelihood_Projects | 0.13 | 0.10 | 0.07 | 0.39 | 0.24 |
| Subprojects_Sustainability_Contribution | 0.10 | 0.03 | 0.02 | 0.25 | 0.54 |
7.4.3 Visualising the Results¶
The results are visualised as percentages
%matplotlib inline
plot_likert.plot_counts(all_counts5, plot_likert.scales.agree, plot_percentage=True, figsize=(16, 7), bar_labels=True, bar_labels_color="snow", colors=plot_likert.colors.default_with_darker_neutral)
plt.title("Figure 2: The Percentage of Responses to Variables on Sustainable Transformation", fontsize=18)
plt.savefig("Likertscale.jpg")
plt.savefig("Likertscale1.png", dpi=300)
plt.show()
C:\Users\nazin\AppData\Local\anaconda3\envs\NLTK_Py_3_12\Lib\site-packages\plot_likert\plot_likert.py:101: FutureWarning: parameter `plot_percentage` for `plot_likert.likert_counts` is deprecated, set it to None and use `compute_percentages` instead
  warn(
8. Responses with Defined Choices¶
The responses that had defined choices, other than those with a Likert scale, were grouped in a Pandas dataframe covering the following questions:
- '50. What is the main purpose of landscapes (Forests, Water Bodies, Wetlands… etc.) in your livelihood?': 'Purpose_Landscape',
- '52. How is your connection to nature like?': 'Connection_Nature',
- '54. How long have you worked on this livelihood project?': 'Range_Years',
- '64. Which livelihood is a major contributor to landscape transformation?': 'Contributor_Landscape_Transformation',
- '66. Which type of landscape do you depend on much for a livelihood?': 'Landscape_Depended_Livelihood'
df4=df2.drop(df2.columns[[0,1,2,3,4,5,6,7,8,9,10,11,12,14,16,18,19,20,21,22,23,24,25,26,28,30,31,32,33,34]], axis = 1)
#df4
8.1 The Purpose of Landscape to People¶
PL = df4['Purpose_Landscape'].value_counts(dropna=False)
PL_P = (df4['Purpose_Landscape'].value_counts(normalize=True, dropna=False).round(2))
PL_P
Purpose_Landscape
Nature_Protection_Purpose                              0.62
NaN                                                    0.22
Source_Income_Purpose                                  0.11
Nature_Protection_Purpose,Other_Purpose                0.01
Nature_Protection_Purpose,Ancestral_Shrines_Purpose    0.01
Source_Income_Purpose,Nature_Protection_Purpose        0.01
Nature_Protection_Purpose,Source_Income_Purpose        0.01
Source_Income_Purpose,Other_Purpose                    0.01
No_Idea_Purpose                                        0.01
Other_Purpose                                          0.01
Name: proportion, dtype: float64
df_PL = pd.DataFrame(PL)
df_PL
| count | |
|---|---|
| Purpose_Landscape | |
| Nature_Protection_Purpose | 93 |
| NaN | 33 |
| Source_Income_Purpose | 16 |
| Nature_Protection_Purpose,Other_Purpose | 2 |
| Nature_Protection_Purpose,Ancestral_Shrines_Purpose | 1 |
| Source_Income_Purpose,Nature_Protection_Purpose | 1 |
| Nature_Protection_Purpose,Source_Income_Purpose | 1 |
| Source_Income_Purpose,Other_Purpose | 1 |
| No_Idea_Purpose | 1 |
| Other_Purpose | 1 |
ax = sns.countplot(df4["Purpose_Landscape"])
for container in ax.containers:
ax.bar_label(container, fmt="%.0f", label_type="edge", padding=3)
ax.set_title("Figure 3: Number of Responses on the Purpose of the Landscape", fontsize=14)
plt.show()
8.2 The Connection of Nature to People¶
CN = df4['Connection_Nature'].value_counts(dropna=False)
CN_P = (df4['Connection_Nature'].value_counts(normalize=True, dropna=False).round(2))
CN_P
Connection_Nature
Material_Connection                                  0.82
NaN                                                  0.09
Other_Connection                                     0.04
Experiential_Connection                              0.01
Philosophical_Connection,Psychological_Connection    0.01
Psychological_Connection,Material_Connection         0.01
Psychological_Connection                             0.01
Philosophical_Connection                             0.01
Experiential_Connection,Material_Connection          0.01
Name: proportion, dtype: float64
df_CN = pd.DataFrame(CN)
df_CN
| count | |
|---|---|
| Connection_Nature | |
| Material_Connection | 123 |
| NaN | 13 |
| Other_Connection | 6 |
| Experiential_Connection | 2 |
| Philosophical_Connection,Psychological_Connection | 2 |
| Psychological_Connection,Material_Connection | 1 |
| Psychological_Connection | 1 |
| Philosophical_Connection | 1 |
| Experiential_Connection,Material_Connection | 1 |
ax = sns.countplot(df4["Connection_Nature"])
for container in ax.containers:
ax.bar_label(container, fmt="%.0f", label_type="edge", padding=3)
ax.set_title("Figure 4: The Number of Responses to Connection to Nature", fontsize=14)
plt.show()
agreement_levels = ["Material_Connection", "Other_Connection"]
CN_R = df2[df2["Connection_Nature"].isin(agreement_levels)]
CN_R1 = CN_R.drop(CN_R.columns[[0,1,2,3,4,6,7,8,9,10,11,12,13,14,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34]], axis = 1)
CN_R1grouped = CN_R1.groupby('Name_Main_Project')['Connection_Nature']
#CN_R1
lemmatizer=WordNetLemmatizer()
stop_words = set(stopwords.words('english'))
for index, row in CN_R1.iterrows():
CN_R1_filter_sentence = []
CN_R1_sentence = row["Connection_Nature_Reasons"]
if pd.isnull(CN_R1_sentence):
continue
CN_R1_sentence_cleaned = re.sub(r'[^\w\s]','',CN_R1_sentence)
CN_R1_words = nltk.word_tokenize(CN_R1_sentence_cleaned)
CN_R1_words = [lemmatizer.lemmatize(w) for w in CN_R1_words if w.lower() not in stop_words]
CN_R1_filter_sentence.extend(CN_R1_words)
print(CN_R1_filter_sentence)
['use', 'income']
['source', 'income', 'future', 'generation', 'see']
['Source', 'income']
['source', 'income']
['Thats', 'get', 'income', 'well', 'example', 'cattle', 'use', 'cultivating']
['game', 'park', 'help', 'u', 'source', 'income', 'form', 'meat', 'animal', 'tree', 'rain', 'come', 'protected']
['need', 'keep', 'fish', 'instance', 'fish', 'pond', 'get', 'extinct']
['one', 'look', 'daily', 'basis']
['source', 'livelihood']
['need', 'protect', 'nature', 'instance', 'cutting', 'tree', 'along', 'river', 'lead', 'drying', 'destruction', 'animal', 'biodiversity', 'river']
['answer']
['purpose', 'future', 'meet', 'need']
['source', 'income']
['tree', 'help', 'u', 'bring', 'rainfall']
['Source', 'income']
['harvest', 'right', 'time', 'prevent', 'management', 'natural', 'resource', 'properly']
['taking', 'care']
['Source', 'income']
['Source', 'income']
['bring', 'development', 'tourism']
['income']
['natural', 'resource', 'like', 'tree', 'give', 'shade', 'medicine', 'also', 'get', 'fresh', 'air']
['Like', 'water', 'source', 'life', 'tree', 'source', 'fresh', 'water']
['source', 'income']
['source', 'income']
['ZAWA', 'Officers', 'one', 'connected', 'look']
['source', 'income', 'tourism']
['answer']
['help', 'u', 'source', 'food', 'well', 'water']
['tree', 'protected', 'lead', 'rainfall', 'area']
['source', 'livelihood']
['Somehow', 'protect']
['help', 'lot', 'thing', 'air', 'breath']
['protect', 'u', 'instance', 'tree', 'protect', 'wind', 'bring', 'fresh', 'air', 'well', 'prevent', 'river', 'drying']
['Thats', 'get', 'honey', 'bee', 'hive', 'u', 'lot', 'money']
['Thats', 'get', 'free', 'air', 'traditional', 'medicine', 'livelihood', 'depend', 'natural', 'resource']
['instance', 'fish', 'caught', 'brings', 'income', 'source', 'food', 'well', 'tree', 'bring', 'rainfall', 'indirectly']
[]
CN_R1["Connection_Nature_Reasons"] = CN_R1["Connection_Nature_Reasons"].fillna("")
CN_R1["Connection_Nature_Reasons"] = CN_R1["Connection_Nature_Reasons"].astype(str)
CN_R1_Text = " ".join(CN_R1["Connection_Nature_Reasons"])
wordcloud = WordCloud(background_color = "white", width = 1000, height = 400).generate(CN_R1_Text)
plt.figure(figsize=(20, 10))
plt.imshow(wordcloud, interpolation="bilinear")
plt.title("Figure 20: Connection to Nature", loc="left", fontsize=20, pad=20)
plt.axis("off")
plt.show()
CN_R1_bigrams_list = list(CN_R1_filter_sentence)
print(CN_R1_bigrams_list)
#CN_R1_bigram_counts = Counter(zip(bigrams_list, CN_R1_bigrams_list[1:]))
#print(CN_R1_bigram_counts)
#CN_R1_bigrams = pd.DataFrame(CN_R1_bigram_counts.most_common(7),
#columns = ['Word', 'Frequency'])
#print(CN_R1_bigrams)
[]
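The printed list is empty because `CN_R1_filter_sentence` is re-initialised inside the loop, so tokens from earlier rows are discarded (and the final row's reasons were missing). A minimal sketch of the intended bigram count, accumulating tokens across all rows first; the sample responses are hypothetical and `str.split` stands in for `nltk.word_tokenize` to keep the sketch dependency-free:

```python
from collections import Counter

# Hypothetical responses standing in for CN_R1["Connection_Nature_Reasons"]
reasons = ["source of income", "source of income and food", None]

all_tokens = []  # accumulate across ALL rows instead of resetting per row
for sentence in reasons:
    if sentence is None:  # pd.isnull(...) in the notebook
        continue
    all_tokens.extend(sentence.split())

# Count adjacent word pairs over the accumulated token list
bigram_counts = Counter(zip(all_tokens, all_tokens[1:]))
print(bigram_counts.most_common(2))
```

The same accumulated list can then feed the commented-out `Counter`/`DataFrame` steps above to tabulate the most frequent pairs.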
8.3 The Length of Existence of the Livelihood Project¶
RY_grouped = df2.groupby('Name_Main_Project')['Range_Years'].value_counts(dropna=False)
RY_grouped
Name_Main_Project Range_Years
EbA_CENTRAL_MUCHINGA_LUAPULA 3to4Years_Long 2
NaN 1
Ecosystem Conservation_NORTH_WESTERN 3to4Years_Long 8
NaN 2
7to8Years_Long 1
PIN_WESTERN 3to4Years_Long 5
NaN 1
SCRALA_SOUTHERN_WESTERN_NORTHEN 3to4Years_Long 12
5to6Years_Long 9
Lessthan2Years_Long 6
Greaterthan9Years_Long 2
7to8Years_Long 1
SCReBS_WESTERN 5to6Years_Long 5
7to8Years_Long 5
Greaterthan9Years_Long 3
3to4Years_Long 2
SCRiKA_LS 5to6Years_Long 31
3to4Years_Long 6
NaN 5
7to8Years_Long 2
TRALARD_LNM 3to4Years_Long 28
5to6Years_Long 5
Lessthan2Years_Long 3
Greaterthan9Years_Long 2
NaN 2
Name: count, dtype: int64
RY_grouped1 = pd.DataFrame(RY_grouped)
RY_grouped1
| count | ||
|---|---|---|
| Name_Main_Project | Range_Years | |
| EbA_CENTRAL_MUCHINGA_LUAPULA | 3to4Years_Long | 2 |
| NaN | 1 | |
| Ecosystem Conservation_NORTH_WESTERN | 3to4Years_Long | 8 |
| NaN | 2 | |
| 7to8Years_Long | 1 | |
| PIN_WESTERN | 3to4Years_Long | 5 |
| NaN | 1 | |
| SCRALA_SOUTHERN_WESTERN_NORTHEN | 3to4Years_Long | 12 |
| 5to6Years_Long | 9 | |
| Lessthan2Years_Long | 6 | |
| Greaterthan9Years_Long | 2 | |
| 7to8Years_Long | 1 | |
| SCReBS_WESTERN | 5to6Years_Long | 5 |
| 7to8Years_Long | 5 | |
| Greaterthan9Years_Long | 3 | |
| 3to4Years_Long | 2 | |
| SCRiKA_LS | 5to6Years_Long | 31 |
| 3to4Years_Long | 6 | |
| NaN | 5 | |
| 7to8Years_Long | 2 | |
| TRALARD_LNM | 3to4Years_Long | 28 |
| 5to6Years_Long | 5 | |
| Lessthan2Years_Long | 3 | |
| Greaterthan9Years_Long | 2 | |
| NaN | 2 |
plt.figure(figsize=(8.7, 8.27))
hue_order = ["Lessthan2Years_Long", "3to4Years_Long", "5to6Years_Long", "7to8Years_Long", "Greaterthan9Years_Long", "NaN"]
ax = sns.barplot(data = RY_grouped1, x="count", y="Name_Main_Project", hue="Range_Years", hue_order=hue_order, legend=True)
ax.set_title("Figure 5a: The Number of Livelihood Projects in a Particular Range of Years", fontsize=14)
plt.legend(title="KEY")
for container in ax.containers:
ax.bar_label(container, fmt="%.0f", label_type="edge", padding=3)
plt.show()
LP = df4['Range_Years'].value_counts(dropna=False)
LP_P = (df4['Range_Years'].value_counts(normalize=True, dropna=False).round(2))
LP_P
Range_Years
3to4Years_Long            0.42
5to6Years_Long            0.33
NaN                       0.08
Lessthan2Years_Long       0.06
7to8Years_Long            0.06
Greaterthan9Years_Long    0.05
Name: proportion, dtype: float64
df_LP = pd.DataFrame(LP)
df_LP
| count | |
|---|---|
| Range_Years | |
| 3to4Years_Long | 63 |
| 5to6Years_Long | 50 |
| NaN | 12 |
| Lessthan2Years_Long | 9 |
| 7to8Years_Long | 9 |
| Greaterthan9Years_Long | 7 |
ax = sns.countplot(df4["Range_Years"])
for container in ax.containers:
ax.bar_label(container, fmt="%.0f", label_type="edge", padding=3)
ax.set_title("Figure 5b: The Number of Livelihood Projects in a Particular Range of Years", fontsize=14)
plt.show()
8.4 Major Contributor to Landscape Transformation¶
CLT_grouped = df2.groupby('Name_Main_Project')['Contributor_Landscape_Transformation'].value_counts(dropna=False)
CLT_grouped
Name_Main_Project Contributor_Landscape_Transformation
EbA_CENTRAL_MUCHINGA_LUAPULA Land_Agriculture 3
Ecosystem Conservation_NORTH_WESTERN Land_Agriculture 3
Uncontrolled_Fires 3
NaN 3
Uncontrolled_Fires,Land_Agriculture 1
Wood_Extraction 1
PIN_WESTERN Uncontrolled_Fires 3
Wood_Extraction 2
NaN 1
SCRALA_SOUTHERN_WESTERN_NORTHEN Uncontrolled_Fires 19
Wood_Extraction 7
Land_Agriculture 2
NaN 2
SCReBS_WESTERN Uncontrolled_Fires 7
Wood_Extraction 5
Land_Agriculture 2
Other 1
SCRiKA_LS Uncontrolled_Fires 13
Land_Agriculture 12
Wood_Extraction 11
NaN 6
Uncontrolled_Grazing 2
TRALARD_LNM Land_Agriculture 22
Wood_Extraction 13
Uncontrolled_Fires 4
NaN 1
Name: count, dtype: int64
CLT_grouped1 = pd.DataFrame(CLT_grouped)
CLT_grouped1
| count | ||
|---|---|---|
| Name_Main_Project | Contributor_Landscape_Transformation | |
| EbA_CENTRAL_MUCHINGA_LUAPULA | Land_Agriculture | 3 |
| Ecosystem Conservation_NORTH_WESTERN | Land_Agriculture | 3 |
| Uncontrolled_Fires | 3 | |
| NaN | 3 | |
| Uncontrolled_Fires,Land_Agriculture | 1 | |
| Wood_Extraction | 1 | |
| PIN_WESTERN | Uncontrolled_Fires | 3 |
| Wood_Extraction | 2 | |
| NaN | 1 | |
| SCRALA_SOUTHERN_WESTERN_NORTHEN | Uncontrolled_Fires | 19 |
| Wood_Extraction | 7 | |
| Land_Agriculture | 2 | |
| NaN | 2 | |
| SCReBS_WESTERN | Uncontrolled_Fires | 7 |
| Wood_Extraction | 5 | |
| Land_Agriculture | 2 | |
| Other | 1 | |
| SCRiKA_LS | Uncontrolled_Fires | 13 |
| Land_Agriculture | 12 | |
| Wood_Extraction | 11 | |
| NaN | 6 | |
| Uncontrolled_Grazing | 2 | |
| TRALARD_LNM | Land_Agriculture | 22 |
| Wood_Extraction | 13 | |
| Uncontrolled_Fires | 4 | |
| NaN | 1 |
plt.figure(figsize=(8.7, 8.27))
hue_order = ["Land_Agriculture", "Wood_Extraction", "Uncontrolled_Fires", "Uncontrolled_Grazing", "Other", "NaN"]
ax = sns.barplot(data = CLT_grouped1, x="count", y="Name_Main_Project", hue="Contributor_Landscape_Transformation", hue_order=hue_order, legend=True)
ax.set_title("Figure 6a: Major Contributor to Landscape Transformation", fontsize=14)
plt.legend(title="KEY")
for container in ax.containers:
ax.bar_label(container, fmt="%.0f", label_type="edge", padding=3)
plt.show()
CLT = df4['Contributor_Landscape_Transformation'].value_counts(dropna=False)
CLT_P = (df4['Contributor_Landscape_Transformation'].value_counts(normalize=True, dropna=False).round(2))
CLT_P
Contributor_Landscape_Transformation
Uncontrolled_Fires                     0.33
Land_Agriculture                       0.29
Wood_Extraction                        0.26
NaN                                    0.09
Uncontrolled_Grazing                   0.01
Uncontrolled_Fires,Land_Agriculture    0.01
Other                                  0.01
Name: proportion, dtype: float64
df_CLT = pd.DataFrame(CLT)
df_CLT
| count | |
|---|---|
| Contributor_Landscape_Transformation | |
| Uncontrolled_Fires | 49 |
| Land_Agriculture | 44 |
| Wood_Extraction | 39 |
| NaN | 14 |
| Uncontrolled_Grazing | 2 |
| Uncontrolled_Fires,Land_Agriculture | 1 |
| Other | 1 |
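Counts and proportions are computed in separate cells above; they can also be combined into one summary table. A minimal sketch on toy data (illustrative values only, not the survey responses):

```python
import pandas as pd

# Toy column standing in for Contributor_Landscape_Transformation (illustrative only)
s = pd.Series(["Uncontrolled_Fires", "Land_Agriculture",
               "Uncontrolled_Fires", "Wood_Extraction"])

# Both value_counts calls share the same index, so they join cleanly side by side
summary = pd.concat(
    {"count": s.value_counts(),
     "proportion": s.value_counts(normalize=True).round(2)},
    axis=1,
)
print(summary)
```

On the real data the same pattern applies with `df4['Contributor_Landscape_Transformation']` in place of `s`, adding `dropna=False` to both calls if missing responses should be included.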
ax = sns.countplot(x=df4["Contributor_Landscape_Transformation"])
for container in ax.containers:
ax.bar_label(container, fmt="%.0f", label_type="edge", padding=3)
ax.set_title("Figure 6b: Major Contributor to Landscape Transformation", fontsize=14)
plt.show()
agreement_levels = ["Wood_Extraction", "Land_Agriculture", "Uncontrolled_Fires"]
CLT_R = df2[df2["Contributor_Landscape_Transformation"].isin(agreement_levels)]
CLT_R1 = CLT_R.drop(CLT_R.columns[[0,1,2,3,4,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,29,30,31,32,33,34]], axis = 1)
CLT_R1grouped = CLT_R1.groupby('Name_Main_Project')['Contributor_Landscape_Transformation']
#CLT_R1
lemmatizer=WordNetLemmatizer()
stop_words = set(stopwords.words('english'))
for index, row in CLT_R1.iterrows():
CLT_R1_filter_sentence = []
CLT_R1_sentence = row["Contributor_Landscape_Transformation_Reasons"]
if pd.isnull(CLT_R1_sentence):
continue
CLT_R1_sentence_cleaned = re.sub(r'[^\w\s]','',CLT_R1_sentence)
CLT_R1_words = nltk.word_tokenize(CLT_R1_sentence_cleaned)
CLT_R1_words = [lemmatizer.lemmatize(w) for w in CLT_R1_words if w.lower() not in stop_words]
CLT_R1_filter_sentence.extend(CLT_R1_words)
print(CLT_R1_filter_sentence)
['requires', 'huge', 'land', 'others', 'like', 'making', 'charcoal'] ['burning', 'charcoal', 'give', 'smoke', 'destroys', 'ozone', 'layer', 'well', 'brings', 'acidic', 'rain'] ['agriculture', 'activity', 'food'] ['electricity', 'thus', 'huge', 'demand', 'energy'] ['charcoal', 'purpose', 'bad', 'land', 'left', 'bare', 'unlike', 'agriculture', 'cutting', 'replaced', 'plant'] ['cultivation', 'cassava', 'requires', 'always', 'barren', 'land', 'never', 'cultivated', 'grow', 'well', 'crop', 'bean', 'vegetable', 'maize', 'reduce', 'deforestation'] ['burn', 'tree', 'shoot'] ['Chitemene', 'system', 'cultivation', 'lead', 'deforestation', 'others'] ['source', 'food', 'people', 'area'] ['Thats', 'get', 'livelihood'] ['source', 'income'] ['time', 'cut', 'tree', 'even', 'want', 'cultivate', 'well', 'burining', 'kill', 'animal', 'necesary', 'making', 'soil', 'fertile'] ['Buring', 'destroy', 'product', 'crop', 'soil'] ['cutting', 'treed', 'change', 'landscape'] ['cutting', 'tress', 'destroys', 'much', 'regrowth'] ['u', 'income', 'huge', 'area', 'tree', 'cut'] ['commercial', 'agriculture', 'activity', 'clear', 'huge', 'chuck', 'land'] ['Source', 'income'] ['forming', 'og', 'food', 'crop'] ['source', 'income'] ['people', 'plant', 'huge', 'area', 'land', 'livelihood'] ['animal', 'food', 'well', 'tree', 'would', 'dry'] ['income', 'food', 'crop'] ['farming', 'activity', 'brings', 'income'] ['soil', 'get', 'degraded', 'movebto', 'another', 'portion', 'land'] ['source', 'income'] ['people', 'burn', 'agriculture', 'area', 'looking', 'rat', 'addition', 'burn', 'food', 'crop', 'due', 'search', 'rat'] ['cutting', 'tree', 'destroy', 'change', 'landscape', 'livelihood'] ['tree', 'challenge', 'growung', 'burned'] ['people', 'cutting', 'tree', 'charcoal', 'destroy', 'difficult', 'regeneration'] ['Source', 'income'] ['tree', 'dry', 'burnt'] ['fire', 'cause', 'lot', 'damage', 'biodiversity', 'well', 'plant'] ['tree', 'get', 'burn', 'reducing', 'regeneration', 'well', 'fertility', 'soil'] ['tree', 
'get', 'destroyed', 'burnt', 'difficult', 'regenerate'] ['cutting', 'tree', 'charcoal', 'prevents', 'regeneration'] ['cutting', 'tree', 'charcoal', 'requires', 'huge', 'land', 'compared', 'farm', 'one', 'partition', 'land'] ['small', 'biodiversity', 'destroyed', 'fire'] ['Burning', 'destroys', 'tree'] ['fire', 'burn', 'tree', 'lead', 'dry'] ['people', 'cutting', 'huge', 'chuck', 'land', 'cultivation', 'crop'] ['burning', 'lot', 'thing', 'like', 'snake', 'house', 'important', 'biodiversity', 'destroyed'] ['source', 'livelihood'] ['Thats', 'source', 'food', 'crop'] ['people', 'cutting', 'tree', 'anyhow'] ['cutting', 'tree', 'charcoal', 'finish', 'tree', 'others'] ['burning', 'destroys', 'fertility', 'soil'] ['reason', 'told', 'start', 'conservation', 'farming', 'entail', 'farming', 'locality'] ['Chitemene', 'system', 'burning', 'disallowed'] ['main', 'purpose', 'livelihood'] ['fire', 'destroys', 'lot', 'thing', 'air', 'breathe', 'soil', 'fertility', 'small', 'animal', 'plant'] ['land', 'cultivated', 'done', 'big', 'land', 'transforms', 'landscape'] ['fire', 'destroys', 'flower', 'production', 'honey', 'reduced'] ['people', 'make', 'charcoal', 'cut', 'tree', 'fresh', 'cut', 'huge', 'area'] ['destroys', 'lot', 'biodiversity', 'egg', 'bird', 'snake'] ['livelihood', 'depends', 'activity', 'charcoal', 'burning'] ['animal', 'always', 'grazing', 'vegetation', 'room', 'given', 'plant', 'sprout'] ['tree', 'cut', 'charcoal', 'stem', 'dy', 'replacement'] ['Fire', 'destroys', 'everything', 'others', 'even', 'biodiversity', 'get', 'killed'] ['fire', 'cut', 'across', 'huge', 'area', 'kill', 'everything', 'way'] ['land', 'used', 'agriculture', 'purpose', 'cutting', 'done', 'large', 'scale'] ['Burning', 'destroys', 'life', 'everything'] ['fire', 'destroys', 'almost', 'everything'] ['cause', 'soil', 'erossion'] ['agriculture', 'activity', 'uprooting', 'tree', 'thus', 'distruction', 'environment'] ['burning', 'cover', 'huge', 'area', 'kill', 'biodiversity', 'way'] ['making', 
'charcoal', 'mainly', 'focus', 'big', 'tree', 'make', 'desert', 'area'] ['destroys', 'tree', 'everyone', 'cutting', 'tree', 'energy'] ['tree', 'cut', 'charcoal', 'take', 'time', 'grow', 'thus', 'causing', 'climate', 'change', 'turn', 'affecting', 'main', 'livelihood', 'agriculture'] ['cut', 'tree', 'rainfall', 'reduce', 'lead', 'animal', 'dying', 'thirst'] ['clear', 'law', 'prevents', 'people', 'scared', 'burning', 'bush'] ['huge', 'land', 'cleared', 'garden'] ['continuous', 'process', 'cutting', 'tree', 'charcoal', 'thus', 'destroys', 'landscape'] ['rainfall', 'reduce', 'would', 'much', 'wind', 'well', 'climate', 'change'] ['tree', 'reduced', 'going', 'problem', 'rainfall'] ['lot', 'famers', 'thus', 'major', 'contributor'] ['Source', 'livelihood', 'crop'] ['farmer'] ['burn', 'bush', 'destroys', 'tree', 'well', 'animal'] ['burning', 'make', 'tree', 'dry', 'well', 'young', 'animal', 'get', 'killed'] ['fire', 'distroys', 'almost', 'everything', 'even', 'bird', 'forest', 'destroyed'] ['tradition', 'cultivation', 'survive', 'tradition', 'burning', 'destroys', 'lot', 'thing', 'hence', 'major', 'contributor'] ['burning', 'bush', 'destroys', 'environment'] ['animal', 'depend', 'grass', 'thus', 'burnt', 'animal', 'would', 'die'] ['depend', 'farming', 'livelihood'] ['charcoal', 'people', 'burn', 'big', 'tree', 'destroys', 'habitat', 'animal'] ['Anyone', 'burn', 'bush', 'would', 'burn', 'forest', 'fire', 'guard', 'prevent', 'forest', 'burnt'] ['knowledge', 'protect', 'environment', 'people', 'thing', 'without', 'knowledge', 'protect', 'environment', 'thus', 'need', 'protect'] ['grass', 'grazing', 'animal', 'would', 'destroyed'] ['destroys', 'food', 'elephant', 'depends', 'burnt'] ['Fire', 'destroys', 'habitat', 'animal'] ['animal', 'feed', 'grass', 'grass', 'burnt', 'animal', 'come', 'community', 'disturb'] ['burn', 'destroys', 'habitat', 'animal'] ['Cutting', 'tree', 'come', 'strong', 'wind'] ['Thats', 'food', 'come'] ['Cutting', 'tree', 'make', 'gardenmatema', 'meaning', 
'tree', 'replaced'] ['Fire', 'burn', 'everything', 'term', 'life'] ['fire', 'destroys', 'grazing', 'grass', 'animal'] ['People', 'always', 'burning', 'bush', 'without', 'control'] ['destroys', 'life'] ['Fire', 'burn', 'whole', 'area'] ['Burning', 'kill', 'lot', 'thing', 'biodiversity'] ['tree', 'cut', 'lot', 'thus', 'distructs', 'lot'] ['tree', 'cut', 'grow'] ['cutting', 'tree', 'brings', 'drought'] ['tree', 'cut', 'area', 'possing', 'treat', 'strong', 'wind'] ['rain', 'water', 'flow', 'due', 'lack', 'tree', 'block'] ['lead', 'climate', 'due', 'lack', 'tree'] ['animal', 'graz', 'thus', 'make', 'move', 'long', 'distance', 'find', 'pasture'] ['rainfall', 'back', 'green', 'vegetation', 'thus', 'burnt', 'less', 'rainfall'] ['tree', 'get', 'burnt', 'dry', 'thats', 'reason', 'less', 'rainfall'] ['use', 'Chitemene', 'system', 'cutting', 'tree', 'charcoal'] ['tree', 'cut', 'made', 'charcoal', 'trunk', 'stem', 'grow'] ['destroys', 'everything', 'way'] ['destroys', 'everything', 'path']
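The cleaning steps in the loop above (punctuation stripping, tokenization, stop-word removal) recur for several columns, so they can be factored into a helper. A dependency-light sketch that runs stand-alone: it uses regex tokenization and a small illustrative stop-word set in place of NLTK's list, and omits the lemmatizer.

```python
import re

# Illustrative stop words only; the notebook uses stopwords.words('english')
STOP_WORDS = {"the", "is", "a", "of", "and", "to", "it", "for"}

def clean_tokens(sentence, stop_words=STOP_WORDS):
    """Strip punctuation, split on whitespace, and drop stop words."""
    if sentence is None:
        return []
    cleaned = re.sub(r"[^\w\s]", "", sentence)
    return [w for w in cleaned.split() if w.lower() not in stop_words]

print(clean_tokens("Burning charcoal gives smoke, and it destroys the ozone layer."))
# ['Burning', 'charcoal', 'gives', 'smoke', 'destroys', 'ozone', 'layer']
```

Each per-column loop then reduces to a call like `clean_tokens(row[column])`, with NLTK's tokenizer and lemmatizer slotted back in where needed.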
CLT_R1["Contributor_Landscape_Transformation_Reasons"] = CLT_R1["Contributor_Landscape_Transformation_Reasons"].fillna("")
CLT_R1["Contributor_Landscape_Transformation_Reasons"] = CLT_R1["Contributor_Landscape_Transformation_Reasons"].astype(str)
CLT_R1_Text = " ".join(CLT_R1["Contributor_Landscape_Transformation_Reasons"])
wordcloud = WordCloud(background_color = "white", width = 1000, height = 400).generate(CLT_R1_Text)
plt.figure(figsize=(20, 10))
plt.imshow(wordcloud, interpolation="bilinear")
plt.title("Figure 20: Contributor Landscape Transformation", loc="left", fontsize=20, pad=20)
plt.axis("off")
plt.show()
8.5 The Landscape Depended on for a Livelihood¶
LDL_grouped = df2.groupby('Name_Main_Project')['Landscape_Depended_Livelihood'].value_counts(dropna=False)
LDL_grouped
Name_Main_Project Landscape_Depended_Livelihood
EbA_CENTRAL_MUCHINGA_LUAPULA Agriculture_Areas_Dependent 3
Ecosystem Conservation_NORTH_WESTERN NaN 5
Agriculture_Areas_Dependent 3
Forest_Dependent 2
Wetlands_Dependent 1
PIN_WESTERN Agriculture_Areas_Dependent 5
NaN 1
SCRALA_SOUTHERN_WESTERN_NORTHEN Agriculture_Areas_Dependent 21
Wetlands_Dependent 6
Forest_Dependent 2
NaN 1
SCReBS_WESTERN Agriculture_Areas_Dependent 10
Wetlands_Dependent 5
SCRiKA_LS Agriculture_Areas_Dependent 30
Forest_Dependent 4
Wetlands_Dependent 4
NaN 4
Wetlands_Dependent,Forest_Dependent 2
TRALARD_LNM Agriculture_Areas_Dependent 20
Wetlands_Dependent 9
Forest_Dependent 5
Wetlands_Dependent,Forest_Dependent 2
Agriculture_Areas_Dependent,Wetlands_Dependent 1
Forest_Dependent,Wetlands_Dependent 1
Wetlands_Dependent,Agriculture_Areas_Dependent 1
NaN 1
Name: count, dtype: int64
LDL_grouped1 = pd.DataFrame(LDL_grouped)
LDL_grouped1
| count | ||
|---|---|---|
| Name_Main_Project | Landscape_Depended_Livelihood | |
| EbA_CENTRAL_MUCHINGA_LUAPULA | Agriculture_Areas_Dependent | 3 |
| Ecosystem Conservation_NORTH_WESTERN | NaN | 5 |
| Agriculture_Areas_Dependent | 3 | |
| Forest_Dependent | 2 | |
| Wetlands_Dependent | 1 | |
| PIN_WESTERN | Agriculture_Areas_Dependent | 5 |
| NaN | 1 | |
| SCRALA_SOUTHERN_WESTERN_NORTHEN | Agriculture_Areas_Dependent | 21 |
| Wetlands_Dependent | 6 | |
| Forest_Dependent | 2 | |
| NaN | 1 | |
| SCReBS_WESTERN | Agriculture_Areas_Dependent | 10 |
| Wetlands_Dependent | 5 | |
| SCRiKA_LS | Agriculture_Areas_Dependent | 30 |
| Forest_Dependent | 4 | |
| Wetlands_Dependent | 4 | |
| NaN | 4 | |
| Wetlands_Dependent,Forest_Dependent | 2 | |
| TRALARD_LNM | Agriculture_Areas_Dependent | 20 |
| Wetlands_Dependent | 9 | |
| Forest_Dependent | 5 | |
| Wetlands_Dependent,Forest_Dependent | 2 | |
| Agriculture_Areas_Dependent,Wetlands_Dependent | 1 | |
| Forest_Dependent,Wetlands_Dependent | 1 | |
| Wetlands_Dependent,Agriculture_Areas_Dependent | 1 | |
| NaN | 1 |
plt.figure(figsize=(8.7, 8.27))
hue_order = ["Agriculture_Areas_Dependent", "Wetlands_Dependent", "Forest_Dependent", "NaN"]
# reset_index() turns the MultiIndex group keys into columns that seaborn can look up
ax = sns.barplot(data=LDL_grouped1.reset_index(), x="count", y="Name_Main_Project", hue="Landscape_Depended_Livelihood", hue_order=hue_order, legend=True)
ax.set_title("Figure 7a: Landscapes Depended on for Livelihoods", fontsize=14)
plt.legend(title="KEY")
for container in ax.containers:
ax.bar_label(container, fmt="%.0f", label_type="edge", padding=3)
plt.show()
LDL = df4['Landscape_Depended_Livelihood'].value_counts(dropna=False)
LDL_P = (df4['Landscape_Depended_Livelihood'].value_counts(normalize=True, dropna=False).round(2))
LDL_P
Landscape_Depended_Livelihood
Agriculture_Areas_Dependent                       0.61
Wetlands_Dependent                                0.17
Forest_Dependent                                  0.09
NaN                                               0.09
Wetlands_Dependent,Forest_Dependent               0.03
Agriculture_Areas_Dependent,Wetlands_Dependent    0.01
Forest_Dependent,Wetlands_Dependent               0.01
Wetlands_Dependent,Agriculture_Areas_Dependent    0.01
Name: proportion, dtype: float64
df_LDL = pd.DataFrame(LDL)
df_LDL
| count | |
|---|---|
| Landscape_Depended_Livelihood | |
| Agriculture_Areas_Dependent | 92 |
| Wetlands_Dependent | 25 |
| Forest_Dependent | 13 |
| NaN | 13 |
| Wetlands_Dependent,Forest_Dependent | 4 |
| Agriculture_Areas_Dependent,Wetlands_Dependent | 1 |
| Forest_Dependent,Wetlands_Dependent | 1 |
| Wetlands_Dependent,Agriculture_Areas_Dependent | 1 |
ax = sns.countplot(x=df4["Landscape_Depended_Livelihood"])
for container in ax.containers:
ax.bar_label(container, fmt="%.0f", label_type="edge", padding=3)
ax.set_title("Figure 7b: Landscapes Depended on for Livelihoods", fontsize=14)
plt.show()
agreement_levels = ["Agriculture_Areas_Dependent", "Wetlands_Dependent","Forest_Dependent"]
LDL_R = df2[df2["Landscape_Depended_Livelihood"].isin(agreement_levels)]
LDL_R1 = LDL_R.drop(LDL_R.columns[[0,1,2,3,4,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,31,32,33,34]], axis = 1)
LDL_R1grouped = LDL_R1.groupby('Name_Main_Project')['Landscape_Depended_Livelihood']
#LDL_R1
lemmatizer=WordNetLemmatizer()
stop_words = set(stopwords.words('english'))
for index, row in LDL_R1.iterrows():
LDL_R1_filter_sentence = []
LDL_R1_sentence = row["Landscape_Depeneded_Livelihood_Reasons"]
if pd.isnull(LDL_R1_sentence):
continue
LDL_R1_sentence_cleaned = re.sub(r'[^\w\s]','',LDL_R1_sentence)
LDL_R1_words = nltk.word_tokenize(LDL_R1_sentence_cleaned)
LDL_R1_words = [lemmatizer.lemmatize(w) for w in LDL_R1_words if w.lower() not in stop_words]
LDL_R1_filter_sentence.extend(LDL_R1_words)
print(LDL_R1_filter_sentence)
['cultivates', 'large', 'area', 'land', 'crop', 'sold', 'livelihood'] ['catching', 'fish', 'source', 'income', 'area'] ['Food', 'main', 'source', 'livelihood', 'thus', 'without', 'way', 'earn', 'living'] ['source', 'income', 'well', 'food', 'consumption'] ['farming', 'help', 'sourcing', 'food'] ['Thats', 'source', 'income'] ['source', 'livelihood'] ['income', 'come', 'living'] ['help', 'u', 'cultivate', 'cassava', 'maize', 'much', 'fishing', 'activity'] ['farming', 'obtain', 'food', 'consumption', 'household', 'level'] ['food', 'security', 'come'] ['tree', 'bring', 'rainfall', 'area', 'compared', 'area'] ['source', 'income'] ['majority', 'people', 'cultivate', 'land', 'livelihood', 'depending', 'buying', 'shop'] ['less', 'rainfall', 'help', 'u', 'source', 'water'] ['Thats', 'whats', 'common', 'within', 'area'] ['source', 'traditional', 'medicine'] ['provides', 'water', 'drinking', 'water', 'life'] ['Water', 'source', 'life'] ['source', 'food', 'crop', 'income'] ['use', 'cultivation'] ['Source', 'income'] ['mostly', 'farmer'] ['Thats', 'source', 'income', 'livelihood'] ['lot', 'activity', 'come', 'water'] ['food', 'crop'] ['food', 'crop', 'come'] ['Thats', 'source', 'food', 'crop', 'well', 'income'] ['Thats', 'grow', 'crop', 'livelihood'] ['livelihood', 'farming', 'main', 'stay'] ['provides', 'resource', 'people', 'term', 'wood'] ['obtain', 'food', 'crop', 'assist', 'u', 'livelihood'] ['Water', 'life', 'used', 'water', 'environment', 'tree', 'grow', 'prevent', 'developing', 'desert'] ['source', 'crop', 'food'] ['Thats', 'grow', 'crop', 'livelihood'] ['Source', 'life'] ['portion', 'land', 'cultivating', 'one', 'area', 'thus', 'cutting', 'tree'] ['Thats', 'get', 'food', 'product', 'livelihood'] ['Water', 'life'] ['Water', 'life'] ['Water', 'life'] ['answer'] ['crop', 'income', 'come'] ['Thats', 'get', 'food', 'eat'] ['Source', 'livelihood'] ['Life', 'water'] ['Water', 'life', 'evryone', 'drink'] ['income', 'food', 'crop', 'come'] ['tree', 'rain', 'much', 'well', 
'soil', 'fertile'] ['Water', 'life'] ['Thats', 'get', 'food', 'crop', 'survival'] ['every', 'individual', 'depend', 'agriculture', 'land'] ['Water', 'life', 'thus', 'water', 'plant', 'dry'] ['source', 'mushroom', 'catapilars', 'forest', 'protected', 'well', 'working', 'honarary', 'officer'] ['Water', 'life', 'used', 'every', 'situation', 'cultivation', 'watering', 'garden'] ['brings', 'u', 'food', 'well', 'income'] ['flood', 'cattle', 'go', 'forest', 'area', 'graz'] ['Thats', 'get', 'food'] ['farmer'] ['farm', 'product'] ['Thats', 'get', 'food', 'crop'] ['Thats', 'get', 'food', 'crop'] ['get', 'food', 'crop', 'livelihood', 'dependent'] ['livelihood', 'based', 'farming'] ['source', 'source', 'livelihood'] ['get', 'food', 'crop', 'livehoods'] ['area', 'farming', 'found', 'forest', 'area'] ['cattle', 'graze'] ['cultivate', 'source', 'food', 'crop', 'game', 'park', 'help', 'depend', 'ZAWA', 'Officers', 'give', 'resource'] ['farmer', 'nature'] ['Thats', 'source', 'food', 'crop'] ['Thats', 'food', 'come'] ['Thats', 'get', 'food', 'eating', 'livelihood'] ['cultivate', 'get', 'crop'] ['food', 'income', 'come', 'help', 'u'] ['Thats', 'get', 'maize', 'staple', 'food', 'farming'] ['keeping', 'bird', 'gardening', 'thus', 'forest', 'protected', 'well', 'animal'] ['found'] ['Source', 'food', 'crop'] ['wetland', 'dry', 'thus', 'depend', 'agriculture', 'food', 'crop'] ['Thats', 'get', 'crop', 'food'] ['food', 'gotten', 'borehole', 'sank'] ['farmer'] ['Thats', 'animal', 'feed'] ['Thats', 'get', 'income'] ['Everything', 'come', 'agriculture'] ['farm', 'crop'] ['Water', 'source', 'life', 'animal', 'cattle'] ['farmer', 'nature'] ['farmer', 'nature', 'southern', 'province'] ['Thats', 'get', 'income'] ['Water', 'life', 'cattle', 'drink', 'water', 'wetland'] ['Water', 'life', 'domesticated', 'animal', 'need', 'water', 'depend', 'animal'] ['Thats', 'get', 'food', 'crop'] ['Thats', 'get', 'food', 'crop', 'income'] ['agriculture', 'get', 'food', 'crop', 'income'] ['major', 'activity', 
'around', 'area'] ['food', 'crop'] ['Thats', 'get', 'food', 'living'] ['Thats', 'get', 'food', 'crop'] ['farmer'] ['Thats', 'get', 'food'] ['farming'] ['farmer'] ['Thats', 'food', 'come'] ['farmer'] ['Thats', 'animal', 'graze'] ['flood', 'go', 'leave', 'moisture', 'thats', 'help', 'people', 'grow', 'crop'] ['Thats', 'grow', 'food', 'crop'] ['wetland', 'cultivate', 'rice', 'well', 'get', 'water', 'watering', 'garden'] ['get', 'food', 'eating'] ['Thats', 'grow', 'crop'] ['Thats', 'get', 'food'] ['Thats', 'food', 'crop', 'come'] ['Thats', 'food', 'crop', 'come'] ['Thats', 'get', 'food', 'crop'] ['Thats', 'get', 'food', 'crop'] ['farmer', 'rice'] ['get', 'food', 'crop'] ['cultivate', 'flood', 'plain', 'animal', 'graze'] ['Thats', 'get', 'food'] ['farmer'] ['get', 'crop'] ['farmer'] ['Thats', 'get', 'food', 'money'] ['food', 'crop', 'found'] ['Thats', 'plant', 'maize', 'rice']
LDL_R1["Landscape_Depeneded_Livelihood_Reasons"] = LDL_R1["Landscape_Depeneded_Livelihood_Reasons"].fillna("")
LDL_R1["Landscape_Depeneded_Livelihood_Reasons"] = LDL_R1["Landscape_Depeneded_Livelihood_Reasons"].astype(str)
LDL_R1_Text = " ".join(LDL_R1["Landscape_Depeneded_Livelihood_Reasons"])
wordcloud = WordCloud(background_color = "white", width = 1000, height = 400).generate(LDL_R1_Text)
plt.figure(figsize=(20, 10))
plt.imshow(wordcloud, interpolation="bilinear")
plt.title("Figure 20: Landscape Depended on for Livelihood", loc="left", fontsize=20, pad=20)
plt.axis("off")
plt.show()
8.6 Explanation of Choices¶
This section examines the free-text explanations given when a respondent selected an option that was not among the choices provided in the questionnaire.
df5=df2.drop(df2.columns[[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30]], axis = 1)
#df5
lemmatizer=WordNetLemmatizer()
stop_words = set(stopwords.words('english'))
for index, row in df5.iterrows():
PL_RS1_filter_sentence = []
PL_RS1_sentence = row["Purpose_Landscape_Specific"]
if pd.isnull(PL_RS1_sentence):
continue
PL_RS1_sentence_cleaned = re.sub(r'[^\w\s]','',PL_RS1_sentence)
PL_RS1_words = nltk.word_tokenize(PL_RS1_sentence_cleaned)
PL_RS1_words = [lemmatizer.lemmatize(w) for w in PL_RS1_words if w.lower() not in stop_words]
PL_RS1_filter_sentence.extend(PL_RS1_words)
print(PL_RS1_filter_sentence)
['protection', 'nature', 'others', 'important'] ['fire', 'burn', 'biodiversity', 'allow', 'environmental', 'process'] ['tourist', 'bring', 'income', 'term', 'viewing', 'animal'] ['source', 'income', 'tourism']
lemmatizer=WordNetLemmatizer()
stop_words = set(stopwords.words('english'))
for index, row in df5.iterrows():
CN_RS1_filter_sentence = []
CN_RS1_sentence = row["Connection_Nature_Specific"]
if pd.isnull(CN_RS1_sentence):
continue
CN_RS1_sentence_cleaned = re.sub(r'[^\w\s]','',CN_RS1_sentence)
CN_RS1_words = nltk.word_tokenize(CN_RS1_sentence_cleaned)
CN_RS1_words = [lemmatizer.lemmatize(w) for w in CN_RS1_words if w.lower() not in stop_words]
CN_RS1_filter_sentence.extend(CN_RS1_words)
print(CN_RS1_filter_sentence)
['Taking', 'care', 'nature', 'without', 'destroying'] ['Taking', 'care', 'nature'] ['Source', 'good', 'air', 'food', 'product'] ['related'] ['Protection', 'tree'] ['Taking', 'care', 'animal']
lemmatizer=WordNetLemmatizer()
stop_words = set(stopwords.words('english'))
for index, row in df5.iterrows():
CLT_RS1_filter_sentence = []
CLT_RS1_sentence = row["Contributor_Landscape_Transformation_Specific"]
if pd.isnull(CLT_RS1_sentence):
continue
CLT_RS1_sentence_cleaned = re.sub(r'[^\w\s]','',CLT_RS1_sentence)
CLT_RS1_words = nltk.word_tokenize(CLT_RS1_sentence_cleaned)
CLT_RS1_words = [lemmatizer.lemmatize(w) for w in CLT_RS1_words if w.lower() not in stop_words]
CLT_RS1_filter_sentence.extend(CLT_RS1_words)
print(CLT_RS1_filter_sentence)
['Cutting', 'tree', 'sale']
lemmatizer=WordNetLemmatizer()
stop_words = set(stopwords.words('english'))
for index, row in df5.iterrows():
LDL_RS1_filter_sentence = []
LDL_RS1_sentence = row["Landscape_Depended_Livelihood_Specific"]
if pd.isnull(LDL_RS1_sentence):
continue
LDL_RS1_sentence_cleaned = re.sub(r'[^\w\s]','',LDL_RS1_sentence)
LDL_RS1_words = nltk.word_tokenize(LDL_RS1_sentence_cleaned)
LDL_RS1_words = [lemmatizer.lemmatize(w) for w in LDL_RS1_words if w.lower() not in stop_words]
LDL_RS1_filter_sentence.extend(LDL_RS1_words)
print(LDL_RS1_filter_sentence)
9. Descriptive Statistics¶
This section gives background on the study sites.
df6=df2.drop(df2.columns[[0,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34]], axis = 1)
#df6
9.1 Main Project Types¶
This shows the number of questionnaires administered under each project type.
MP = df6['Name_Main_Project'].value_counts(dropna=False)
MP
Name_Main_Project
SCRiKA_LS                               44
TRALARD_LNM                             40
SCRALA_SOUTHERN_WESTERN_NORTHEN         30
SCReBS_WESTERN                          15
Ecosystem Conservation_NORTH_WESTERN    11
PIN_WESTERN                              6
EbA_CENTRAL_MUCHINGA_LUAPULA             3
NaN                                      1
Name: count, dtype: int64
ax = sns.countplot(x=df6["Name_Main_Project"])
for container in ax.containers:
ax.bar_label(container, fmt="%.0f", label_type="edge", padding=3)
ax.set_title("Figure 8: The Total Number of Respondents in each of the Main Project", fontsize=14)
plt.show()
9.2 The Wards¶
The wards in which respondents were interviewed and the number of respondents in each.
WN = df6['Ward_Name'].value_counts(dropna=False)
WN
Ward_Name
Namwala Central ward    17
Omba ward               15
Mbila ward              14
Yeta ward               11
Mwanambuyu ward         11
Lulimala ward           11
Isamba ward             10
Kalobolelwa ward         9
Ntonga ward              8
Kalanga ward             8
Moofwe ward              7
Chitimbwa ward           7
Makuya ward              6
Nachikufu ward           5
Luubwe ward              4
Ntambu ward              4
NaN                      2
Silunga ward             1
Name: count, dtype: int64
ax = sns.countplot(x=df6["Ward_Name"])
for container in ax.containers:
ax.bar_label(container, fmt="%.0f", label_type="edge", padding=3)
ax.set_title("Figure 9: Number of Respondents in each Ward", fontsize=14)
plt.show()
9.3 Number of Beneficiaries¶
This shows the number of households that benefited, as reported by the interviewees.
NB = df6['Number_Beneficiaries'].value_counts(dropna=False)
NB
Number_Beneficiaries
Greaterthan40People_Many    40
10to20People_Many           25
NaN                         25
20to30People_Many           24
30to40People_Many           23
Lessthan10People_Many       13
Name: count, dtype: int64
ax = sns.countplot(x=df6["Number_Beneficiaries"])
for container in ax.containers:
ax.bar_label(container, fmt="%.0f", label_type="edge", padding=3)
ax.set_title("Figure 10: Number of household beneficiaries in each Cohort Category", fontsize=14)
plt.show()
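`value_counts` (and hence the bars in Figure 10) orders categories by frequency, but the cohort labels have a natural order from smallest to largest. Reindexing restores that order; a sketch on toy responses, with the `order` list taken from the categories observed above:

```python
import pandas as pd

# Natural ordering of the cohort labels in Number_Beneficiaries
order = ["Lessthan10People_Many", "10to20People_Many", "20to30People_Many",
         "30to40People_Many", "Greaterthan40People_Many"]

# Toy responses (illustrative only)
s = pd.Series(["10to20People_Many", "Greaterthan40People_Many", "10to20People_Many"])

# reindex imposes the natural order and fills unseen categories with 0
counts = s.value_counts().reindex(order, fill_value=0)
print(counts)
```

The same list can be passed as `order=order` to `sns.countplot` to keep the bars in that sequence.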
9.4 Size of Landscape¶
SIZE = df6['Size'].value_counts(dropna=False)
SIZE
Size
Lessthan4Lima_Size        48
Greaterthan16Lima_Size    40
NotApplicable_Size        31
4to8Lima_Size             13
NaN                       10
8to12Lima_Size             5
12to16Lima_Size            3
Name: count, dtype: int64
ax = sns.countplot(x=df6["Size"])
for container in ax.containers:
ax.bar_label(container, fmt="%.0f", label_type="edge", padding=3)
ax.set_title("Figure 11: Number of beneficiaries in each Land Size Category", fontsize=14)
plt.show()
9.5 Representing Others¶
This distinguishes respondents answering only for their own household from those also representing other households.
RO = df6['Representing_Others'].value_counts(dropna=False)
RO
Representing_Others
yes    125
no      23
NaN      2
Name: count, dtype: int64
ax = sns.countplot(x=df6["Representing_Others"])
for container in ax.containers:
ax.bar_label(container, fmt="%.0f", label_type="edge", padding=3)
ax.set_title("Figure 12: Number of beneficiaries Representing Others", fontsize=14)
plt.show()
10. Reasons¶
The pandas DataFrame retains the columns holding the reasons respondents gave for their Likert-scale answers.
df7=df2.drop(df2.columns[[0,1,2,3,4,5,6,7,8,10,11,13,14,15,17,18,19,20,22,23,24,25,26,27,29,31,32,33,34]], axis = 1)
#df7
10.1 Reasons for Cultural Practices Hindering¶
The reasons given for cultural practices that hinder transformation.
lemmatizer = WordNetLemmatizer()
CPH = df7['Cultural_Practices_Hinder_Reason'].str.lower().str.cat(sep=' ')
CPH_words = nltk.tokenize.word_tokenize(CPH)
# Note: len(CPH_words) is the length of the whole token list, so this condition is
# always true and no tokens are filtered; a per-token filter would test len(word)
CPH_filtered_tokens = [word for word in CPH_words if len(CPH_words) >= 8]
CPH_lemmatized_words = [lemmatizer.lemmatize(word) for word in CPH_filtered_tokens]
CPH_token_counts = Counter(CPH_lemmatized_words)
CPH_columns = pd.DataFrame(CPH_token_counts.most_common(67),
columns = ['Word', 'Frequency'])
#print(CPH_columns)
bigrams_list = list(bigrams(CPH_filtered_tokens))
#print(bigrams_list)
# Note: zipping the bigram list against itself offset by one counts consecutive pairs
# of bigrams (trigram-style windows) rather than plain bigram frequencies
CPH_bigram_counts = Counter(zip(bigrams_list, bigrams_list[1:]))
#print(CPH_bigram_counts)
CPH_bigrams = pd.DataFrame(CPH_bigram_counts.most_common(67),
columns = ['Word', 'Frequency'])
print(CPH_bigrams)
Word Frequency
0 ((there, is), (is, nothing)) 73
1 ((is, nothing), (nothing, there)) 55
2 ((nothing, there), (there, is)) 52
3 ((protect, the), (the, environment)) 6
4 ((to, protect), (protect, the)) 6
5 ((there, is), (is, no)) 5
6 ((there, are), (are, no)) 4
7 ((the, environment), (environment, there)) 4
8 ((is, nothing), (nothing, the)) 4
9 ((customary, practices), (practices, that)) 3
10 ((as, well), (well, as)) 3
11 ((are, no), (no, cultural)) 3
12 ((nothing, there), (there, are)) 3
13 ((destroys, the), (the, environment)) 3
14 ((cutting, down), (down, of)) 3
15 ((environment, there), (there, is)) 3
16 ((is, nothing), (nothing, i)) 3
17 ((``, ''), ('', malende)) 3
18 (('', malende), (malende, '')) 3
19 ((malende, ''), ('', '')) 3
20 ((is, no), (no, cultural)) 2
21 ((there, are), (are, rules)) 2
22 ((there, is), (is, a)) 2
23 ((that, the), (the, bush)) 2
24 ((it, is), (is, not)) 2
25 ((are, customary), (customary, practices)) 2
26 ((a, customary), (customary, practice)) 2
27 ((that, destroys), (destroys, the)) 2
28 ((down, of), (of, trees)) 2
29 ((along, the), (the, river)) 2
30 ((a, long), (long, time)) 2
31 ((but, at), (at, the)) 2
32 ((at, the), (the, moment)) 2
33 ((no, cultural), (cultural, practices)) 2
34 ((the, natural), (natural, resources)) 2
35 ((there, are), (are, places)) 2
36 ((nothing, i), (i, have)) 2
37 ((is, nothing), (nothing, our)) 2
38 ((us, to), (to, protect)) 2
39 ((of, the), (the, environment)) 2
40 ((the, environment), (environment, the)) 2
41 ((on, how), (how, to)) 2
42 ((how, to), (to, protect)) 2
43 ((trees, there), (there, is)) 2
44 ((cut, down), (down, trees)) 2
45 ((called, ``), (``, '')) 2
46 (('', ''), ('', that)) 2
47 ((trees, but), (but, the)) 2
48 ((of, the), (the, trees)) 2
49 ((it, has), (has, never)) 1
50 ((has, never), (never, happed)) 1
51 ((never, happed), (happed, before)) 1
52 ((happed, before), (before, in)) 1
53 ((before, in), (in, his)) 1
54 ((in, his), (his, life)) 1
55 ((his, life), (life, time)) 1
56 ((life, time), (time, there)) 1
57 ((time, there), (there, is)) 1
58 ((there, is), (is, need)) 1
59 ((is, need), (need, to)) 1
60 ((need, to), (to, harvest)) 1
61 ((to, harvest), (harvest, trees)) 1
62 ((harvest, trees), (trees, when)) 1
63 ((trees, when), (when, they)) 1
64 ((when, they), (they, have)) 1
65 ((they, have), (have, fully)) 1
66 ((have, fully), (fully, grown)) 1
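In the table above each "Word" entry holds two bigrams, because `Counter(zip(bigrams_list, bigrams_list[1:]))` counts consecutive pairs of bigrams (trigram-style windows). If plain bigram frequencies were the intent, counting the bigram list directly is enough; a sketch on toy tokens echoing the dominant pattern in the responses:

```python
from collections import Counter

# Toy token list (illustrative only)
tokens = ["there", "is", "nothing", "there", "is", "nothing", "there"]

# Pair each token with its successor, then count the pairs themselves
bigram_counts = Counter(zip(tokens, tokens[1:]))
print(bigram_counts.most_common(3))
# [(('there', 'is'), 2), (('is', 'nothing'), 2), (('nothing', 'there'), 2)]
```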
trigrams_list = list(trigrams(CPH_filtered_tokens))
#print(trigrams_list)
# Note: as with the bigrams above, this counts consecutive pairs of trigrams rather
# than plain trigram frequencies
CPH_trigram_counts = Counter(zip(trigrams_list, trigrams_list[1:]))
#print(CPH_trigram_counts)
CPH_trigrams = pd.DataFrame(CPH_trigram_counts.most_common(67),
columns = ['Word', 'Frequency'])
print(CPH_trigrams)
                                                               Word  Frequency
0                      ((there, is, nothing), (is, nothing, there))         54
1                      ((is, nothing, there), (nothing, there, is))         52
2                      ((nothing, there, is), (there, is, nothing))         47
3                 ((to, protect, the), (protect, the, environment))          5
4                        ((there, is, nothing), (is, nothing, the))          4
5                           ((there, are, no), (are, no, cultural))          3
6                      ((is, nothing, there), (nothing, there, are))         3
7             ((the, environment, there), (environment, there, is))          3
8                ((environment, there, is), (there, is, nothing))           3
9                          ((there, is, nothing), (is, nothing, i))          3
10                         ((``, '', malende), ('', malende, ''))            3
11                        (('', malende, ''), (malende, '', ''))             3
12                          ((nothing, there, is), (there, is, no))          2
13                          ((there, is, no), (is, no, cultural))            2
14      ((are, customary, practices), (customary, practices, that))          2
15              ((that, destroys, the), (destroys, the, environment))        2
16          ((destroys, the, environment), (the, environment, there))        2
17                      ((cutting, down, of), (down, of, trees))             2
18                         ((but, at, the), (at, the, moment))               2
19             ((are, no, cultural), (no, cultural, practices))              2
20                        ((is, nothing, i), (nothing, i, have))             2
21                     ((there, is, nothing), (is, nothing, our))            2
22                     ((us, to, protect), (to, protect, the))               2
23                        ((on, how, to), (how, to, protect))                2
24                     ((how, to, protect), (to, protect, the))              2
25                      ((it, has, never), (has, never, happed))             1
26                  ((has, never, happed), (never, happed, before))          1
27                 ((never, happed, before), (happed, before, in))           1
28                    ((happed, before, in), (before, in, his))              1
29                       ((before, in, his), (in, his, life))                1
30                       ((in, his, life), (his, life, time))                1
31                     ((his, life, time), (life, time, there))              1
32                    ((life, time, there), (time, there, is))               1
33                     ((time, there, is), (there, is, need))                1
34                      ((there, is, need), (is, need, to))                  1
35                     ((is, need, to), (need, to, harvest))                 1
36                 ((need, to, harvest), (to, harvest, trees))               1
37               ((to, harvest, trees), (harvest, trees, when))              1
38               ((harvest, trees, when), (trees, when, they))               1
39                 ((trees, when, they), (when, they, have))                 1
40                 ((when, they, have), (they, have, fully))                 1
41                ((they, have, fully), (have, fully, grown))                1
42                ((have, fully, grown), (fully, grown, the))                1
43                 ((fully, grown, the), (grown, the, same))                 1
44                ((grown, the, same), (the, same, applies))                 1
45               ((the, same, applies), (same, applies, to))                 1
46          ((same, applies, to), (applies, to, biodiversity))               1
47      ((applies, to, biodiversity), (to, biodiversity, there))             1
48      ((to, biodiversity, there), (biodiversity, there, are))              1
49           ((biodiversity, there, are), (there, are, no))                  1
50                ((there, are, no), (are, no, customary))                   1
51          ((are, no, customary), (no, customary, practices))               1
52     ((no, customary, practices), (customary, practices, that))            1
53      ((customary, practices, that), (practices, that, hinder))            1
54      ((practices, that, hinder), (that, hinder, sustainable))             1
55   ((that, hinder, sustainable), (hinder, sustainable, management))        1
56  ((hinder, sustainable, management), (sustainable, management, there))    1
57     ((sustainable, management, there), (management, there, are))          1
58             ((management, there, are), (there, are, crop))                1
59              ((there, are, crop), (are, crop, rotations))                 1
60            ((are, crop, rotations), (crop, rotations, made))              1
61            ((crop, rotations, made), (rotations, made, on))               1
62               ((rotations, made, on), (made, on, the))                    1
63                   ((made, on, the), (on, the, land))                      1
64                  ((on, the, land), (the, land, when))                     1
65            ((the, land, when), (land, when, cultivating))                 1
66           ((land, when, cultivating), (when, cultivating, to))            1
lemmatizer = WordNetLemmatizer()
CAC = df7['Cultural_Aspects_Considered_Reasons'].str.lower().str.cat(sep=' ')
CAC_words = nltk.tokenize.word_tokenize(CAC)
CAC_filtered_tokens = [word for word in CAC_words if len(word) >= 4]  # keep tokens of four or more characters
CAC_lemmatized_words = [lemmatizer.lemmatize(word) for word in CAC_filtered_tokens]
CAC_token_counts = Counter(CAC_lemmatized_words)
CAC_columns = pd.DataFrame(CAC_token_counts.most_common(67),
columns = ['Word', 'Frequency'])
print(CAC_columns)
Word Frequency
0 the 73
1 not 52
2 is 51
3 we 47
4 there 45
5 nothing 39
6 do 36
7 to 23
8 are 23
9 it 17
10 project 15
11 that 13
12 of 13
13 cultural 12
14 have 11
15 can 11
16 included 11
17 they 11
18 no 10
19 in 10
20 a 10
21 because 10
22 traditional 8
23 aspect 7
24 follow 7
25 practice 7
26 and 6
27 be 6
28 if 6
29 our 6
30 for 5
31 from 5
32 include 5
33 considered 5
34 what 4
35 so 4
36 chief 4
37 give 4
38 or 4
39 answer 4
40 authority 4
41 at 3
42 time 3
43 land 3
44 only 3
45 government 3
46 u 3
47 when 3
48 law 3
49 place 3
50 would 3
51 well 3
52 customary 3
53 destroyed 3
54 done 3
55 but 3
56 people 3
57 change 3
58 community 3
59 history 2
60 need 2
61 come 2
62 with 2
63 say 2
64 , 2
65 tree 2
66 cut 2
CAC_bigrams_list = list(bigrams(CAC_filtered_tokens))
#print(CAC_bigrams_list)
CAC_bigram_counts = Counter(zip(CAC_bigrams_list, CAC_bigrams_list[1:]))
#print(CAC_bigram_counts)
CAC_bigrams = pd.DataFrame(CAC_bigram_counts.most_common(67),
columns = ['Word', 'Frequency'])
print(CAC_bigrams)
Word Frequency 0 ((there, is), (is, nothing)) 33 1 ((we, do), (do, not)) 26 2 ((is, nothing), (nothing, we)) 9 3 ((nothing, we), (we, do)) 9 4 ((is, nothing), (nothing, there)) 8 5 ((nothing, there), (there, is)) 7 6 ((do, not), (not, do)) 7 7 ((do, not), (not, include)) 5 8 ((they, are), (are, not)) 5 9 ((not, do), (do, it)) 5 10 ((the, cultural), (cultural, aspects)) 5 11 ((are, not), (not, included)) 4 12 ((there, are), (are, no)) 3 13 ((is, nothing), (nothing, the)) 3 14 ((the, traditional), (traditional, authority)) 3 15 ((cultural, aspects), (aspects, are)) 3 16 ((not, included), (included, we)) 3 17 ((included, we), (we, do)) 3 18 ((that, there), (there, is)) 2 19 ((we, only), (only, follow)) 2 20 ((we, have), (have, to)) 2 21 ((has, to), (to, be)) 2 22 ((is, nothing), (nothing, at)) 2 23 ((nothing, at), (at, the)) 2 24 ((at, the), (the, moment)) 2 25 ((we, follow), (follow, the)) 2 26 ((follow, the), (the, laws)) 2 27 ((the, project), (project, can)) 2 28 ((if, there), (there, are)) 2 29 ((are, no), (no, customary)) 2 30 ((no, customary), (customary, practices)) 2 31 ((if, it), (it, is)) 2 32 ((it, is), (is, not)) 2 33 ((not, work), (work, well)) 2 34 ((not, do), (do, that)) 2 35 ((is, nothing), (nothing, no)) 2 36 ((nothing, no), (no, answer)) 2 37 ((do, it), (it, we)) 2 38 ((do, not), (not, have)) 2 39 ((we, always), (always, follow)) 2 40 ((not, include), (include, there)) 2 41 ((include, there), (there, is)) 2 42 ((is, nothing), (nothing, they)) 2 43 ((can, not), (not, be)) 2 44 ((do, not), (not, the)) 2 45 ((are, considered), (considered, because)) 2 46 ((in, the), (the, community)) 2 47 ((is, nothing), (nothing, included)) 2 48 ((nothing, is), (is, considered)) 2 49 ((considered, there), (there, is)) 2 50 ((do, not), (not, not)) 2 51 ((do, not), (not, we)) 2 52 ((aspects, are), (are, not)) 2 53 ((there, is), (is, no)) 1 54 ((is, no), (no, cultural)) 1 55 ((no, cultural), (cultural, history)) 1 56 ((cultural, history), (history, in)) 1 57 ((history, 
in), (in, the)) 1 58 ((in, the), (the, area)) 1 59 ((the, area), (area, there)) 1 60 ((area, there), (there, is)) 1 61 ((there, is), (is, need)) 1 62 ((is, need), (need, to)) 1 63 ((need, to), (to, have)) 1 64 ((to, have), (have, power)) 1 65 ((have, power), (power, in)) 1 66 ((power, in), (in, what)) 1
CAC_trigrams_list = list(trigrams(CAC_filtered_tokens))
#print(CAC_trigrams_list)
CAC_trigram_counts = Counter(zip(CAC_trigrams_list, CAC_trigrams_list[1:]))
#print(CAC_trigram_counts)
CAC_trigrams = pd.DataFrame(CAC_trigram_counts.most_common(67),
columns = ['Word', 'Frequency'])
print(CAC_trigrams)
Word Frequency 0 ((there, is, nothing), (is, nothing, we)) 9 1 ((nothing, we, do), (we, do, not)) 9 2 ((is, nothing, we), (nothing, we, do)) 8 3 ((there, is, nothing), (is, nothing, there)) 7 4 ((is, nothing, there), (nothing, there, is)) 7 5 ((nothing, there, is), (there, is, nothing)) 7 6 ((we, do, not), (do, not, do)) 7 7 ((we, do, not), (do, not, include)) 5 8 ((do, not, do), (not, do, it)) 5 9 ((there, is, nothing), (is, nothing, the)) 3 10 ((the, cultural, aspects), (cultural, aspects, are)) 3 11 ((not, included, we), (included, we, do)) 3 12 ((included, we, do), (we, do, not)) 3 13 ((there, is, nothing), (is, nothing, at)) 2 14 ((is, nothing, at), (nothing, at, the)) 2 15 ((nothing, at, the), (at, the, moment)) 2 16 ((they, are, not), (are, not, included)) 2 17 ((if, there, are), (there, are, no)) 2 18 ((there, are, no), (are, no, customary)) 2 19 ((are, no, customary), (no, customary, practices)) 2 20 ((if, it, is), (it, is, not)) 2 21 ((do, not, do), (not, do, that)) 2 22 ((there, is, nothing), (is, nothing, no)) 2 23 ((is, nothing, no), (nothing, no, answer)) 2 24 ((not, do, it), (do, it, we)) 2 25 ((we, do, not), (do, not, have)) 2 26 ((do, not, include), (not, include, there)) 2 27 ((not, include, there), (include, there, is)) 2 28 ((include, there, is), (there, is, nothing)) 2 29 ((there, is, nothing), (is, nothing, they)) 2 30 ((we, do, not), (do, not, the)) 2 31 ((there, is, nothing), (is, nothing, included)) 2 32 ((considered, there, is), (there, is, nothing)) 2 33 ((we, do, not), (do, not, not)) 2 34 ((we, do, not), (do, not, we)) 2 35 ((cultural, aspects, are), (aspects, are, not)) 2 36 ((aspects, are, not), (are, not, included)) 2 37 ((are, not, included), (not, included, we)) 2 38 ((there, is, no), (is, no, cultural)) 1 39 ((is, no, cultural), (no, cultural, history)) 1 40 ((no, cultural, history), (cultural, history, in)) 1 41 ((cultural, history, in), (history, in, the)) 1 42 ((history, in, the), (in, the, area)) 1 43 ((in, the, area), (the, 
area, there)) 1 44 ((the, area, there), (area, there, is)) 1 45 ((area, there, is), (there, is, need)) 1 46 ((there, is, need), (is, need, to)) 1 47 ((is, need, to), (need, to, have)) 1 48 ((need, to, have), (to, have, power)) 1 49 ((to, have, power), (have, power, in)) 1 50 ((have, power, in), (power, in, what)) 1 51 ((power, in, what), (in, what, is)) 1 52 ((in, what, is), (what, is, being)) 1 53 ((what, is, being), (is, being, formulated)) 1 54 ((is, being, formulated), (being, formulated, so)) 1 55 ((being, formulated, so), (formulated, so, that)) 1 56 ((formulated, so, that), (so, that, there)) 1 57 ((so, that, there), (that, there, is)) 1 58 ((that, there, is), (there, is, ownership)) 1 59 ((there, is, ownership), (is, ownership, there)) 1 60 ((is, ownership, there), (ownership, there, are)) 1 61 ((ownership, there, are), (there, are, no)) 1 62 ((there, are, no), (are, no, cultural)) 1 63 ((are, no, cultural), (no, cultural, aspects)) 1 64 ((no, cultural, aspects), (cultural, aspects, at)) 1 65 ((cultural, aspects, at), (aspects, at, the)) 1 66 ((aspects, at, the), (at, the, time)) 1
lemmatizer = WordNetLemmatizer()
CoN = df7['Connection_Nature_Reasons'].str.lower().str.cat(sep=' ')
CoN_words = nltk.tokenize.word_tokenize(CoN)
CoN_filtered_tokens = [word for word in CoN_words if len(word) >= 4]  # keep tokens of four or more characters
CoN_lemmatized_words = [lemmatizer.lemmatize(word) for word in CoN_filtered_tokens]
CoN_token_counts = Counter(CoN_lemmatized_words)
CoN_columns = pd.DataFrame(CoN_token_counts.most_common(67),
columns = ['Word', 'Frequency'])
#print(CoN_columns)
CoN_bigrams_list = list(bigrams(CoN_filtered_tokens))
#print(CoN_bigrams_list)
CoN_bigram_counts = Counter(zip(CoN_bigrams_list, CoN_bigrams_list[1:]))
#print(CoN_bigram_counts)
CoN_bigrams = pd.DataFrame(CoN_bigram_counts.most_common(67),
columns = ['Word', 'Frequency'])
print(CoN_bigrams)
Word Frequency 0 ((source, of), (of, income)) 11 1 ((a, source), (source, of)) 9 2 ((as, well), (well, as)) 5 3 ((they, are), (are, a)) 5 4 ((are, a), (a, source)) 5 5 ((of, income), (income, they)) 3 6 ((the, natural), (natural, resources)) 3 7 ((the, animals), (animals, can)) 2 8 ((animals, can), (can, be)) 2 9 ((can, be), (be, used)) 2 10 ((the, future), (future, generation)) 2 11 ((income, they), (they, are)) 2 12 ((future, generation), (generation, to)) 2 13 ((them, source), (source, of)) 2 14 ((it, is), (is, a)) 2 15 ((is, a), (a, source)) 2 16 ((there, is), (is, need)) 2 17 ((is, need), (need, to)) 2 18 ((are, the), (the, ones)) 2 19 ((look, after), (after, them)) 2 20 ((source, of), (of, livelihood)) 2 21 ((of, income), (income, the)) 2 22 ((help, us), (us, in)) 2 23 ((are, source), (source, of)) 2 24 ((source, of), (of, food)) 2 25 ((of, food), (food, as)) 2 26 ((food, as), (as, well)) 2 27 ((a, lot), (lot, of)) 2 28 ((thats, where), (where, we)) 2 29 ((where, we), (we, get)) 2 30 ((be, used), (used, by)) 1 31 ((used, by), (by, the)) 1 32 ((by, the), (the, future)) 1 33 ((future, generation), (generation, as)) 1 34 ((generation, as), (as, well)) 1 35 ((as, well), (well, how)) 1 36 ((well, how), (how, the)) 1 37 ((how, the), (the, animals)) 1 38 ((be, used), (used, as)) 1 39 ((used, as), (as, an)) 1 40 ((as, an), (an, example)) 1 41 ((an, example), (example, on)) 1 42 ((example, on), (on, how)) 1 43 ((on, how), (how, people)) 1 44 ((how, people), (people, should)) 1 45 ((people, should), (should, lead)) 1 46 ((should, lead), (lead, their)) 1 47 ((lead, their), (their, life)) 1 48 ((their, life), (life, through)) 1 49 ((life, through), (through, experiments)) 1 50 ((through, experiments), (experiments, he)) 1 51 ((experiments, he), (he, gains)) 1 52 ((he, gains), (gains, knowledge)) 1 53 ((gains, knowledge), (knowledge, on)) 1 54 ((knowledge, on), (on, the)) 1 55 ((on, the), (the, management)) 1 56 ((the, management), (management, of)) 1 57 ((management, 
of), (of, forest)) 1 58 ((of, forest), (forest, to)) 1 59 ((forest, to), (to, use)) 1 60 ((to, use), (use, some)) 1 61 ((use, some), (some, of)) 1 62 ((some, of), (of, them)) 1 63 ((of, them), (them, for)) 1 64 ((them, for), (for, income)) 1 65 ((for, income), (income, they)) 1 66 ((they, are), (are, important)) 1
CoN_trigrams_list = list(trigrams(CoN_filtered_tokens))
#print(CoN_trigrams_list)
CoN_trigram_counts = Counter(zip(CoN_trigrams_list, CoN_trigrams_list[1:]))
#print(CoN_trigram_counts)
CoN_trigrams = pd.DataFrame(CoN_trigram_counts.most_common(67),
columns = ['Word', 'Frequency'])
print(CoN_trigrams)
Word Frequency 0 ((a, source, of), (source, of, income)) 6 1 ((they, are, a), (are, a, source)) 5 2 ((are, a, source), (a, source, of)) 5 3 ((source, of, income), (of, income, they)) 3 4 ((the, animals, can), (animals, can, be)) 2 5 ((animals, can, be), (can, be, used)) 2 6 ((them, source, of), (source, of, income)) 2 7 ((it, is, a), (is, a, source)) 2 8 ((is, a, source), (a, source, of)) 2 9 ((there, is, need), (is, need, to)) 2 10 ((a, source, of), (source, of, livelihood)) 2 11 ((source, of, income), (of, income, the)) 2 12 ((source, of, food), (of, food, as)) 2 13 ((of, food, as), (food, as, well)) 2 14 ((food, as, well), (as, well, as)) 2 15 ((thats, where, we), (where, we, get)) 2 16 ((can, be, used), (be, used, by)) 1 17 ((be, used, by), (used, by, the)) 1 18 ((used, by, the), (by, the, future)) 1 19 ((by, the, future), (the, future, generation)) 1 20 ((the, future, generation), (future, generation, as)) 1 21 ((future, generation, as), (generation, as, well)) 1 22 ((generation, as, well), (as, well, how)) 1 23 ((as, well, how), (well, how, the)) 1 24 ((well, how, the), (how, the, animals)) 1 25 ((how, the, animals), (the, animals, can)) 1 26 ((can, be, used), (be, used, as)) 1 27 ((be, used, as), (used, as, an)) 1 28 ((used, as, an), (as, an, example)) 1 29 ((as, an, example), (an, example, on)) 1 30 ((an, example, on), (example, on, how)) 1 31 ((example, on, how), (on, how, people)) 1 32 ((on, how, people), (how, people, should)) 1 33 ((how, people, should), (people, should, lead)) 1 34 ((people, should, lead), (should, lead, their)) 1 35 ((should, lead, their), (lead, their, life)) 1 36 ((lead, their, life), (their, life, through)) 1 37 ((their, life, through), (life, through, experiments)) 1 38 ((life, through, experiments), (through, experiments, he)) 1 39 ((through, experiments, he), (experiments, he, gains)) 1 40 ((experiments, he, gains), (he, gains, knowledge)) 1 41 ((he, gains, knowledge), (gains, knowledge, on)) 1 42 ((gains, knowledge, on), 
(knowledge, on, the)) 1 43 ((knowledge, on, the), (on, the, management)) 1 44 ((on, the, management), (the, management, of)) 1 45 ((the, management, of), (management, of, forest)) 1 46 ((management, of, forest), (of, forest, to)) 1 47 ((of, forest, to), (forest, to, use)) 1 48 ((forest, to, use), (to, use, some)) 1 49 ((to, use, some), (use, some, of)) 1 50 ((use, some, of), (some, of, them)) 1 51 ((some, of, them), (of, them, for)) 1 52 ((of, them, for), (them, for, income)) 1 53 ((them, for, income), (for, income, they)) 1 54 ((for, income, they), (income, they, are)) 1 55 ((income, they, are), (they, are, important)) 1 56 ((they, are, important), (are, important, in)) 1 57 ((are, important, in), (important, in, our)) 1 58 ((important, in, our), (in, our, lifes)) 1 59 ((in, our, lifes), (our, lifes, like)) 1 60 ((our, lifes, like), (lifes, like, trees)) 1 61 ((lifes, like, trees), (like, trees, they)) 1 62 ((like, trees, they), (trees, they, provide)) 1 63 ((trees, they, provide), (they, provide, home)) 1 64 ((they, provide, home), (provide, home, for)) 1 65 ((provide, home, for), (home, for, animals)) 1 66 ((home, for, animals), (for, animals, as)) 1
lemmatizer = WordNetLemmatizer()
CLE = df7['Change_Livelihood_Easy_Reasons'].str.lower().str.cat(sep=' ')
CLE_words = nltk.tokenize.word_tokenize(CLE)
CLE_filtered_tokens = [word for word in CLE_words if len(word) >= 4]  # keep tokens of four or more characters
CLE_lemmatized_words = [lemmatizer.lemmatize(word) for word in CLE_filtered_tokens]
CLE_token_counts = Counter(CLE_lemmatized_words)
CLE_columns = pd.DataFrame(CLE_token_counts.most_common(67),
columns = ['Word', 'Frequency'])
#print(CLE_columns)
CLE_bigrams_list = list(bigrams(CLE_filtered_tokens))
#print(CLE_bigrams_list)
CLE_bigram_counts = Counter(zip(CLE_bigrams_list, CLE_bigrams_list[1:]))
#print(CLE_bigram_counts)
CLE_bigrams = pd.DataFrame(CLE_bigram_counts.most_common(67),
columns = ['Word', 'Frequency'])
print(CLE_bigrams)
Word Frequency 0 ((as, long), (long, as)) 14 1 ((it, is), (is, a)) 7 2 ((to, change), (change, because)) 6 3 ((long, as), (as, there)) 6 4 ((need, to), (to, change)) 5 5 ((we, can), (can, change)) 5 6 ((we, depend), (depend, on)) 5 7 ((we, do), (do, not)) 5 8 ((there, is), (is, need)) 4 9 ((the, natural), (natural, resources)) 4 10 ((as, well), (well, as)) 4 11 ((it, is), (is, easier)) 4 12 ((if, there), (there, are)) 4 13 ((as, there), (there, is)) 4 14 ((do, not), (not, have)) 4 15 ((is, a), (a, challenge)) 4 16 ((of, climate), (climate, change)) 4 17 ((thus, it), (it, is)) 4 18 ((can, not), (not, change)) 4 19 ((if, there), (there, is)) 3 20 ((can, change), (change, the)) 3 21 ((is, need), (need, to)) 3 22 ((long, as), (as, i)) 3 23 ((as, i), (i, have)) 3 24 ((source, of), (of, income)) 3 25 ((that, we), (we, do)) 3 26 ((is, not), (not, easy)) 3 27 ((i, do), (do, not)) 3 28 ((for, our), (our, livelihoods)) 3 29 ((it, can), (can, be)) 3 30 ((long, as), (as, we)) 3 31 ((it, is), (is, easy)) 3 32 ((difficult, to), (to, change)) 3 33 ((change, it), (it, is)) 3 34 ((to, change), (change, to)) 2 35 ((change, is), (is, easier)) 2 36 ((is, easier), (easier, because)) 2 37 ((to, change), (change, if)) 2 38 ((change, if), (if, there)) 2 39 ((can, be), (be, changed)) 2 40 ((from, natural), (natural, resources)) 2 41 ((not, depend), (depend, on)) 2 42 ((depend, on), (on, the)) 2 43 ((on, the), (the, natural)) 2 44 ((have, money), (money, for)) 2 45 ((other, livelihoods), (livelihoods, we)) 2 46 ((a, source), (source, of)) 2 47 ((the, world), (world, is)) 2 48 ((we, need), (need, to)) 2 49 ((there, are), (are, some)) 2 50 ((thus, changing), (changing, is)) 2 51 ((do, not), (not, use)) 2 52 ((use, natural), (natural, resources)) 2 53 ((can, not), (not, be)) 2 54 ((so, that), (that, the)) 2 55 ((i, have), (have, some)) 2 56 ((change, because), (because, of)) 2 57 ((to, climate), (climate, change)) 2 58 ((change, no), (no, answer)) 2 59 ((the, livelihoods), (livelihoods, we)) 2 
60 ((depend, on), (on, them)) 2 61 ((because, the), (the, livelihood)) 2 62 ((resources, we), (we, can)) 2 63 ((that, we), (we, can)) 2 64 ((we, can), (can, do)) 2 65 ((so, that), (that, we)) 2 66 ((us, act), (act, in)) 2
CLE_trigrams_list = list(trigrams(CLE_filtered_tokens))
#print(CLE_trigrams_list)
CLE_trigram_counts = Counter(zip(CLE_trigrams_list, CLE_trigrams_list[1:]))
#print(CLE_trigram_counts)
CLE_trigrams = pd.DataFrame(CLE_trigram_counts.most_common(67),
columns = ['Word', 'Frequency'])
print(CLE_trigrams)
Word Frequency 0 ((as, long, as), (long, as, there)) 6 1 ((long, as, there), (as, there, is)) 4 2 ((it, is, a), (is, a, challenge)) 4 3 ((we, can, change), (can, change, the)) 3 4 ((there, is, need), (is, need, to)) 3 5 ((is, need, to), (need, to, change)) 3 6 ((as, long, as), (long, as, i)) 3 7 ((long, as, i), (as, i, have)) 3 8 ((we, do, not), (do, not, have)) 3 9 ((as, long, as), (long, as, we)) 3 10 ((need, to, change), (to, change, to)) 2 11 ((change, is, easier), (is, easier, because)) 2 12 ((to, change, if), (change, if, there)) 2 13 ((change, if, there), (if, there, is)) 2 14 ((on, the, natural), (the, natural, resources)) 2 15 ((a, source, of), (source, of, income)) 2 16 ((to, change, because), (change, because, of)) 2 17 ((resources, we, can), (we, can, change)) 2 18 ((do, not, have), (not, have, money)) 2 19 ((is, a, challenge), (a, challenge, to)) 2 20 ((a, challenge, to), (challenge, to, change)) 2 21 ((we, only, depend), (only, depend, on)) 2 22 ((livelihood, as, long), (as, long, as)) 2 23 ((long, as, we), (as, we, have)) 2 24 ((thus, i, can), (i, can, not)) 2 25 ((it, as, long), (as, long, as)) 2 26 ((long, as, there), (as, there, are)) 2 27 ((thus, it, is), (it, is, easy)) 2 28 ((it, is, easy), (is, easy, to)) 2 29 ((is, easy, to), (easy, to, change)) 2 30 ((it, is, not), (is, not, easy)) 2 31 ((is, not, easy), (not, easy, but)) 2 32 ((not, easy, but), (easy, but, it)) 2 33 ((easy, but, it), (but, it, is)) 2 34 ((because, of, climate), (of, climate, change)) 2 35 ((thus, it, is), (it, is, a)) 2 36 ((change, as, long), (as, long, as)) 2 37 ((given, to, us), (to, us, by)) 2 38 ((it, is, easier), (is, easier, as)) 2 39 ((is, easier, as), (easier, as, long)) 2 40 ((easier, as, long), (as, long, as)) 2 41 ((as, long, as), (long, as, you)) 2 42 ((if, the, ses), (the, ses, are)) 1 43 ((the, ses, are), (ses, are, taken)) 1 44 ((ses, are, taken), (are, taken, care)) 1 45 ((are, taken, care), (taken, care, of)) 1 46 ((taken, care, of), (care, of, they)) 1 47 
((care, of, they), (of, they, can)) 1 48 ((of, they, can), (they, can, increase)) 1 49 ((they, can, increase), (can, increase, in)) 1 50 ((can, increase, in), (increase, in, number)) 1 51 ((increase, in, number), (in, number, and)) 1 52 ((in, number, and), (number, and, bring)) 1 53 ((number, and, bring), (and, bring, income)) 1 54 ((and, bring, income), (bring, income, the)) 1 55 ((bring, income, the), (income, the, ses)) 1 56 ((income, the, ses), (the, ses, can)) 1 57 ((the, ses, can), (ses, can, be)) 1 58 ((ses, can, be), (can, be, depleted)) 1 59 ((can, be, depleted), (be, depleted, thus)) 1 60 ((be, depleted, thus), (depleted, thus, there)) 1 61 ((depleted, thus, there), (thus, there, need)) 1 62 ((thus, there, need), (there, need, to)) 1 63 ((there, need, to), (need, to, change)) 1 64 ((to, change, to), (change, to, agriculture)) 1 65 ((change, to, agriculture), (to, agriculture, like)) 1 66 ((to, agriculture, like), (agriculture, like, goat)) 1
lemmatizer = WordNetLemmatizer()
CLT = df7['Contributor_Landscape_Transformation_Reasons'].str.lower().str.cat(sep=' ')
CLT_words = nltk.tokenize.word_tokenize(CLT)
CLT_filtered_tokens = [word for word in CLT_words if len(word) >= 4]  # keep tokens of four or more characters
CLT_lemmatized_words = [lemmatizer.lemmatize(word) for word in CLT_filtered_tokens]
CLT_token_counts = Counter(CLT_lemmatized_words)
CLT_columns = pd.DataFrame(CLT_token_counts.most_common(67),
columns = ['Word', 'Frequency'])
#print(CLT_columns)
CLT_bigrams_list = list(bigrams(CLT_filtered_tokens))
#print(CLT_bigrams_list)
CLT_bigram_counts = Counter(zip(CLT_bigrams_list, CLT_bigrams_list[1:]))
#print(CLT_bigram_counts)
CLT_bigrams = pd.DataFrame(CLT_bigram_counts.most_common(67),
columns = ['Word', 'Frequency'])
print(CLT_bigrams)
Word Frequency 0 ((as, well), (well, as)) 8 1 ((a, lot), (lot, of)) 6 2 ((source, of), (of, income)) 5 3 ((there, is), (is, no)) 4 4 ((is, the), (the, source)) 4 5 ((the, source), (source, of)) 4 6 ((the, cutting), (cutting, down)) 4 7 ((a, huge), (huge, area)) 4 8 ((trees, for), (for, charcoal)) 4 9 ((lot, of), (of, things)) 4 10 ((the, fire), (fire, destroys)) 4 11 ((it, destroys), (destroys, the)) 4 12 ((when, the), (the, trees)) 4 13 ((the, trees), (trees, are)) 4 14 ((trees, are), (are, cut)) 4 15 ((than, the), (the, others)) 3 16 ((the, burning), (burning, of)) 3 17 ((it, is), (is, a)) 3 18 ((it, is), (is, the)) 3 19 ((cutting, down), (down, of)) 3 20 ((that, is), (is, the)) 3 21 ((well, as), (as, the)) 3 22 ((the, trees), (trees, get)) 3 23 ((cutting, trees), (trees, for)) 3 24 ((are, a), (a, lot)) 3 25 ((destroys, a), (a, lot)) 3 26 ((fire, destroys), (destroys, the)) 3 27 ((destroys, the), (the, habitat)) 3 28 ((the, trees), (trees, have)) 2 29 ((requires, a), (a, huge)) 2 30 ((a, huge), (huge, land)) 2 31 ((the, agriculture), (agriculture, activities)) 2 32 ((that, destroys), (destroys, the)) 2 33 ((it, is), (is, for)) 2 34 ((for, agriculture), (agriculture, purposes)) 2 35 ((brings, income), (income, the)) 2 36 ((the, land), (land, is)) 2 37 ((the, trees), (trees, that)) 2 38 ((is, a), (a, source)) 2 39 ((a, source), (source, of)) 2 40 ((source, of), (of, food)) 2 41 ((in, the), (the, area)) 2 42 ((of, income), (income, some)) 2 43 ((the, soil), (soil, the)) 2 44 ((change, the), (the, landscape)) 2 45 ((the, landscape), (landscape, the)) 2 46 ((huge, chucks), (chucks, of)) 2 47 ((of, land), (land, for)) 2 48 ((for, a), (a, livelihood)) 2 49 ((a, livelihood), (livelihood, the)) 2 50 ((the, animals), (animals, will)) 2 51 ((animals, will), (will, not)) 2 52 ((will, not), (not, have)) 2 53 ((cutting, tree), (tree, for)) 2 54 ((tree, for), (for, charcoal)) 2 55 ((it, is), (is, difficult)) 2 56 ((is, difficult), (difficult, for)) 2 57 ((the, trees), (trees, 
dry)) 2 58 ((when, they), (they, are)) 2 59 ((they, are), (are, burnt)) 2 60 ((the, fertility), (fertility, of)) 2 61 ((fertility, of), (of, the)) 2 62 ((of, the), (the, soil)) 2 63 ((the, cutting), (cutting, of)) 2 64 ((cutting, of), (of, trees)) 2 65 ((of, trees), (trees, for)) 2 66 ((burning, destroys), (destroys, the)) 2
CLT_trigrams_list = list(trigrams(CLT_filtered_tokens))
#print(CLT_trigrams_list)
CLT_trigram_counts = Counter(zip(CLT_trigrams_list, CLT_trigrams_list[1:]))
#print(CLT_trigram_counts)
CLT_trigrams = pd.DataFrame(CLT_trigram_counts.most_common(67),
columns = ['Word', 'Frequency'])
print(CLT_trigrams)
Word Frequency 0 ((is, the, source), (the, source, of)) 4 1 ((the, cutting, down), (cutting, down, of)) 3 2 ((as, well, as), (well, as, the)) 3 3 ((a, lot, of), (lot, of, things)) 3 4 ((destroys, a, lot), (a, lot, of)) 3 5 ((when, the, trees), (the, trees, are)) 3 6 ((the, trees, are), (trees, are, cut)) 3 7 ((requires, a, huge), (a, huge, land)) 2 8 ((it, is, a), (is, a, source)) 2 9 ((is, a, source), (a, source, of)) 2 10 ((the, source, of), (source, of, income)) 2 11 ((source, of, income), (of, income, some)) 2 12 ((that, is, the), (is, the, source)) 2 13 ((for, a, livelihood), (a, livelihood, the)) 2 14 ((the, animals, will), (animals, will, not)) 2 15 ((animals, will, not), (will, not, have)) 2 16 ((cutting, tree, for), (tree, for, charcoal)) 2 17 ((it, is, difficult), (is, difficult, for)) 2 18 ((when, they, are), (they, are, burnt)) 2 19 ((the, fertility, of), (fertility, of, the)) 2 20 ((fertility, of, the), (of, the, soil)) 2 21 ((the, cutting, of), (cutting, of, trees)) 2 22 ((cutting, of, trees), (of, trees, for)) 2 23 ((of, trees, for), (trees, for, charcoal)) 2 24 ((cutting, trees, for), (trees, for, charcoal)) 2 25 ((there, are, a), (are, a, lot)) 2 26 ((are, a, lot), (a, lot, of)) 2 27 ((the, burning, of), (burning, of, the)) 2 28 ((the, fire, destroys), (fire, destroys, the)) 2 29 ((trees, are, cut), (are, cut, for)) 2 30 ((are, cut, for), (cut, for, charcoal)) 2 31 ((a, huge, area), (huge, area, and)) 2 32 ((huge, area, and), (area, and, kills)) 2 33 ((the, rainfall, will), (rainfall, will, reduce)) 2 34 ((to, protect, the), (protect, the, environment)) 2 35 ((destroys, the, habitat), (the, habitat, of)) 2 36 ((it, destroys, everything), (destroys, everything, on)) 2 37 ((destroys, everything, on), (everything, on, its)) 2 38 ((the, trees, have), (trees, have, been)) 1 39 ((trees, have, been), (have, been, depleted)) 1 40 ((have, been, depleted), (been, depleted, and)) 1 41 ((been, depleted, and), (depleted, and, it)) 1 42 ((depleted, and, it), 
(and, it, has)) 1 43 ((and, it, has), (it, has, caused)) 1 44 ((it, has, caused), (has, caused, reduced)) 1 45 ((has, caused, reduced), (caused, reduced, rainfall)) 1 46 ((caused, reduced, rainfall), (reduced, rainfall, because)) 1 47 ((reduced, rainfall, because), (rainfall, because, i)) 1 48 ((rainfall, because, i), (because, i, have)) 1 49 ((because, i, have), (i, have, to)) 1 50 ((i, have, to), (have, to, clear)) 1 51 ((have, to, clear), (to, clear, the)) 1 52 ((to, clear, the), (clear, the, land)) 1 53 ((clear, the, land), (the, land, for)) 1 54 ((the, land, for), (land, for, agriculture)) 1 55 ((land, for, agriculture), (for, agriculture, to)) 1 56 ((for, agriculture, to), (agriculture, to, plant)) 1 57 ((agriculture, to, plant), (to, plant, crops)) 1 58 ((to, plant, crops), (plant, crops, hence)) 1 59 ((plant, crops, hence), (crops, hence, the)) 1 60 ((crops, hence, the), (hence, the, landscape)) 1 61 ((hence, the, landscape), (the, landscape, change)) 1 62 ((the, landscape, change), (landscape, change, there)) 1 63 ((landscape, change, there), (change, there, would)) 1 64 ((change, there, would), (there, would, a)) 1 65 ((there, would, a), (would, a, reduction)) 1 66 ((would, a, reduction), (a, reduction, in)) 1
lemmatizer = WordNetLemmatizer()
LDL = df7['Landscape_Depeneded_Livelihood_Reasons'].str.lower().str.cat(sep=' ')
LDL_words = nltk.tokenize.word_tokenize(LDL)
LDL_filtered_tokens = [word for word in LDL_words if len(word) >= 4]  # keep tokens of four or more characters
LDL_lemmatized_words = [lemmatizer.lemmatize(word) for word in LDL_filtered_tokens]
LDL_token_counts = Counter(LDL_lemmatized_words)
LDL_columns = pd.DataFrame(LDL_token_counts.most_common(67),
columns = ['Word', 'Frequency'])
#print(LDL_columns)
LDL_bigrams_list = list(bigrams(LDL_filtered_tokens))
#print(LDL_bigrams_list)
LDL_bigram_counts = Counter(zip(LDL_bigrams_list, LDL_bigrams_list[1:]))
#print(LDL_bigram_counts)
LDL_bigrams = pd.DataFrame(LDL_bigram_counts.most_common(67),
columns = ['Word', 'Frequency'])
print(LDL_bigrams)
Word Frequency 0 ((where, we), (we, get)) 17 1 ((thats, where), (where, we)) 16 2 ((we, get), (get, food)) 16 3 ((a, source), (source, of)) 12 4 ((water, is), (is, life)) 11 5 ((thats, were), (were, we)) 10 6 ((were, we), (we, get)) 9 7 ((we, are), (are, farmers)) 9 8 ((as, well), (well, as)) 7 9 ((get, food), (food, crops)) 7 10 ((source, of), (of, income)) 6 11 ((is, a), (a, source)) 6 12 ((we, get), (get, our)) 6 13 ((food, crops), (crops, thats)) 6 14 ((our, food), (food, crops)) 6 15 ((is, the), (the, source)) 5 16 ((the, source), (source, of)) 5 17 ((source, of), (of, food)) 5 18 ((of, food), (food, crops)) 5 19 ((that, is), (is, where)) 5 20 ((is, where), (where, we)) 5 21 ((crops, thats), (thats, where)) 5 22 ((source, of), (of, livelihood)) 4 23 ((it, is), (is, where)) 4 24 ((food, crops), (crops, and)) 4 25 ((crops, and), (and, income)) 4 26 ((get, our), (our, food)) 4 27 ((are, farmers), (farmers, thats)) 4 28 ((it, is), (is, a)) 3 29 ((is, life), (life, water)) 3 30 ((life, water), (water, is)) 3 31 ((source, of), (of, life)) 3 32 ((for, our), (our, livelihoods)) 3 33 ((food, crops), (crops, come)) 3 34 ((crops, come), (come, from)) 3 35 ((were, we), (we, grow)) 3 36 ((is, life), (life, and)) 3 37 ((crops, for), (for, our)) 3 38 ((comes, from), (from, thats)) 3 39 ((in, the), (the, forest)) 3 40 ((where, we), (we, cultivate)) 3 41 ((we, depend), (depend, on)) 3 42 ((are, farmers), (farmers, by)) 3 43 ((farmers, by), (by, nature)) 3 44 ((crops, thats), (thats, were)) 3 45 ((from, thats), (thats, where)) 3 46 ((get, food), (food, for)) 3 47 ((that, is), (is, were)) 3 48 ((food, crops), (crops, we)) 3 49 ((farmers, thats), (thats, where)) 3 50 ((for, a), (a, livelihood)) 2 51 ((a, livelihood), (livelihood, the)) 2 52 ((there, is), (is, no)) 2 53 ((that, is), (is, a)) 2 54 ((food, for), (for, consumption)) 2 55 ((thats, a), (a, source)) 2 56 ((income, comes), (comes, from)) 2 57 ((for, a), (a, living)) 2 58 ((a, living), (living, it)) 2 59 ((it, helps), 
(helps, us)) 2 60 ((from, farming), (farming, we)) 2 61 ((we, obtain), (obtain, food)) 2 62 ((where, our), (our, food)) 2 63 ((if, there), (there, is)) 2 64 ((as, a), (a, source)) 2 65 ((they, are), (are, a)) 2 66 ((are, a), (a, source)) 2
LDL_trigrams_list = list(trigrams(LDL_filtered_tokens))
#print(LDL_trigrams_list)
LDL_trigram_counts = Counter(zip(LDL_trigrams_list, LDL_trigrams_list[1:]))
#print(LDL_trigram_counts)
LDL_trigrams = pd.DataFrame(LDL_trigram_counts.most_common(67),
columns = ['Word', 'Frequency'])
print(LDL_trigrams)
Word Frequency 0 ((thats, where, we), (where, we, get)) 14 1 ((where, we, get), (we, get, food)) 11 2 ((we, get, food), (get, food, crops)) 7 3 ((is, a, source), (a, source, of)) 6 4 ((thats, were, we), (were, we, get)) 6 5 ((is, the, source), (the, source, of)) 5 6 ((source, of, food), (of, food, crops)) 5 7 ((were, we, get), (we, get, food)) 4 8 ((where, we, get), (we, get, our)) 4 9 ((we, get, our), (get, our, food)) 4 10 ((food, crops, thats), (crops, thats, where)) 4 11 ((crops, thats, where), (thats, where, we)) 4 12 ((we, are, farmers), (are, farmers, thats)) 4 13 ((a, source, of), (source, of, income)) 3 14 ((it, is, a), (is, a, source)) 3 15 ((water, is, life), (is, life, water)) 3 16 ((is, life, water), (life, water, is)) 3 17 ((a, source, of), (source, of, food)) 3 18 ((food, crops, and), (crops, and, income)) 3 19 ((food, crops, come), (crops, come, from)) 3 20 ((thats, were, we), (were, we, grow)) 3 21 ((water, is, life), (is, life, and)) 3 22 ((that, is, where), (is, where, we)) 3 23 ((we, are, farmers), (are, farmers, by)) 3 24 ((are, farmers, by), (farmers, by, nature)) 3 25 ((we, get, food), (get, food, for)) 3 26 ((get, our, food), (our, food, crops)) 3 27 ((for, a, livelihood), (a, livelihood, the)) 2 28 ((the, source, of), (source, of, income)) 2 29 ((that, is, a), (is, a, source)) 2 30 ((thats, a, source), (a, source, of)) 2 31 ((for, a, living), (a, living, it)) 2 32 ((as, a, source), (a, source, of)) 2 33 ((they, are, a), (are, a, source)) 2 34 ((are, a, source), (a, source, of)) 2 35 ((source, of, traditional), (of, traditional, medicine)) 2 36 ((crops, come, from), (come, from, thats)) 2 37 ((as, well, as), (well, as, income)) 2 38 ((life, and, it), (and, it, is)) 2 39 ((were, we, grow), (we, grow, crops)) 2 40 ((life, water, is), (water, is, life)) 2 41 ((as, well, as), (well, as, the)) 2 42 ((is, where, we), (where, we, farm)) 2 43 ((that, where, we), (where, we, get)) 2 44 ((is, where, we), (where, we, cultivate)) 2 45 ((we, depend, on), 
(depend, on, the)) 2 46 ((food, crops, thats), (crops, thats, were)) 2 47 ((comes, from, thats), (from, thats, where)) 2 48 ((from, thats, where), (thats, where, we)) 2 49 ((get, food, for), (food, for, eating)) 2 50 ((that, is, were), (is, were, we)) 2 51 ((is, were, we), (were, we, get)) 2 52 ((animals, graze, in), (graze, in, the)) 2 53 ((are, farmers, thats), (farmers, thats, were)) 2 54 ((were, we, get), (we, get, our)) 2 55 ((our, food, crops), (food, crops, thats)) 2 56 ((get, food, crops), (food, crops, and)) 2 57 ((crops, thats, were), (thats, were, we)) 2 58 ((get, food, crops), (food, crops, we)) 2 59 ((food, crops, we), (crops, we, are)) 2 60 ((crops, we, are), (we, are, farmers)) 2 61 ((are, farmers, thats), (farmers, thats, where)) 2 62 ((farmers, thats, where), (thats, where, we)) 2 63 ((from, we, are), (we, are, farmers)) 2 64 ((he, cultivates, a), (cultivates, a, large)) 1 65 ((cultivates, a, large), (a, large, area)) 1 66 ((a, large, area), (large, area, of)) 1
11. Specific Variables¶
11.1 Main Project and Cultural Practices¶
The responses are grouped according to the main projects
Responses of those who agreed and strongly agreed are collected in one dataframe, and those who disagreed and strongly disagreed in another
The text responses, in the form of reasons, are tokenized and lemmatized
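The clean, tokenize, and filter step described above can be factored into a small helper. The sketch below is a simplified, dependency-free stand-in: it uses a plain `split` and a tiny illustrative stopword list instead of NLTK's `word_tokenize`, `stopwords.words('english')`, and `WordNetLemmatizer`, so the stopword set and example sentence are assumptions for illustration only.

```python
import re

# Simplified stand-in for the NLTK pipeline used in this section.
# The notebook itself uses nltk.word_tokenize and WordNetLemmatizer;
# this stopword list is illustrative only.
STOP_WORDS = {"the", "a", "an", "of", "and", "is", "are", "we", "that"}

def clean_tokens(sentence):
    """Strip punctuation, split into words, and drop stopwords."""
    cleaned = re.sub(r"[^\w\s]", "", sentence)
    return [w for w in cleaned.split() if w.lower() not in STOP_WORDS]

print(clean_tokens("The crops are a source of food, and income!"))
# -> ['crops', 'source', 'food', 'income']
```

Wrapping the steps this way avoids repeating the same cleaning code for each Likert subgroup below.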
CPH_grouped = df2.groupby('Name_Main_Project')['Cultural_Practices_Hinder'].value_counts(dropna=False)
CPH_grouped
Name_Main_Project Cultural_Practices_Hinder
EbA_CENTRAL_MUCHINGA_LUAPULA Strongly_Agree_Likert 2
Strongly_Disagree_Likert 1
Ecosystem Conservation_NORTH_WESTERN Strongly_Disagree_Likert 5
Strongly_Agree_Likert 2
NaN 2
Agree_Likert 1
Disagree_Likert 1
PIN_WESTERN Strongly_Disagree_Likert 5
NaN 1
SCRALA_SOUTHERN_WESTERN_NORTHEN Disagree_Likert 11
Strongly_Disagree_Likert 9
Agree_Likert 5
Strongly_Agree_Likert 2
Undecided_Likert 2
NaN 1
SCReBS_WESTERN Strongly_Disagree_Likert 14
Agree_Likert 1
SCRiKA_LS Strongly_Disagree_Likert 20
Disagree_Likert 6
NaN 6
Agree_Likert 5
Strongly_Agree_Likert 5
Undecided_Likert 2
TRALARD_LNM Strongly_Disagree_Likert 16
Strongly_Agree_Likert 10
Agree_Likert 8
Disagree_Likert 4
NaN 2
Name: count, dtype: int64
CPH_grouped1 = pd.DataFrame(CPH_grouped)
CPH_grouped1
| Name_Main_Project | Cultural_Practices_Hinder | count |
|---|---|---|
| EbA_CENTRAL_MUCHINGA_LUAPULA | Strongly_Agree_Likert | 2 |
| Strongly_Disagree_Likert | 1 | |
| Ecosystem Conservation_NORTH_WESTERN | Strongly_Disagree_Likert | 5 |
| Strongly_Agree_Likert | 2 | |
| NaN | 2 | |
| Agree_Likert | 1 | |
| Disagree_Likert | 1 | |
| PIN_WESTERN | Strongly_Disagree_Likert | 5 |
| NaN | 1 | |
| SCRALA_SOUTHERN_WESTERN_NORTHEN | Disagree_Likert | 11 |
| Strongly_Disagree_Likert | 9 | |
| Agree_Likert | 5 | |
| Strongly_Agree_Likert | 2 | |
| Undecided_Likert | 2 | |
| NaN | 1 | |
| SCReBS_WESTERN | Strongly_Disagree_Likert | 14 |
| Agree_Likert | 1 | |
| SCRiKA_LS | Strongly_Disagree_Likert | 20 |
| Disagree_Likert | 6 | |
| NaN | 6 | |
| Agree_Likert | 5 | |
| Strongly_Agree_Likert | 5 | |
| Undecided_Likert | 2 | |
| TRALARD_LNM | Strongly_Disagree_Likert | 16 |
| Strongly_Agree_Likert | 10 | |
| Agree_Likert | 8 | |
| Disagree_Likert | 4 | |
| NaN | 2 |
plt.figure(figsize=(8.7, 8.27))
hue_order = ["Strongly_Disagree_Likert", "Disagree_Likert", "Undecided_Likert", "Agree_Likert", "Strongly_Agree_Likert", "NaN"]
ax = sns.barplot(data = CPH_grouped1, x="count", y="Name_Main_Project", hue="Cultural_Practices_Hinder", hue_order=hue_order, legend=True)
ax.set_title("Figure 13: Number of Responses on Cultural Practices Hindering Sustainable Management of SES in the Main Projects", fontsize=14)
plt.legend(title="KEY")
for container in ax.containers:
    ax.bar_label(container, fmt="%.0f", label_type="edge", padding=3)
plt.show()
agreement_levels = ["Agree_Likert", "Strongly_Agree_Likert"]
CPH_R = df2[df2["Cultural_Practices_Hinder"].isin(agreement_levels)]
CPH_R1 = CPH_R.drop(CPH_R.columns[[0,1,2,3,4,6,7,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34]], axis = 1)
CPH_R1grouped = CPH_R1.groupby('Name_Main_Project')['Cultural_Practices_Hinder']
#CPH_R1
lemmatizer=WordNetLemmatizer()
stop_words = set(stopwords.words('english'))
for index, row in CPH_R1.iterrows():
    CPH_R1_filter_sentence = []
    CPH_R1_sentence = row["Cultural_Practices_Hinder_Reason"]
    if pd.isnull(CPH_R1_sentence):
        continue
    # remove punctuation, tokenize, drop stopwords, lemmatize
    CPH_R1_sentence_cleaned = re.sub(r'[^\w\s]','',CPH_R1_sentence)
    CPH_R1_words = nltk.word_tokenize(CPH_R1_sentence_cleaned)
    CPH_R1_words = [lemmatizer.lemmatize(w) for w in CPH_R1_words if w.lower() not in stop_words]
    CPH_R1_filter_sentence.extend(CPH_R1_words)
    print(CPH_R1_filter_sentence)
['need', 'harvest', 'tree', 'fully', 'grown', 'applies', 'biodiversity'] ['crop', 'rotation', 'made', 'land', 'cultivating', 'ensure', 'fertility', 'soil'] ['allowed', 'cultivate', 'area', '3', 'year', 'shifting', 'another', 'area'] ['rule', 'chief', 'ensure', 'harvesting', 'period', 'followed', 'well', 'period', 'burning', 'bush'] ['norm', 'bush', 'burned', 'dry', 'bush', 'completely', 'burn', 'also', 'support', 'quick', 'regermination', 'vegetation', 'tree'] ['nothing'] ['never', 'heard', 'learnt'] ['cutting', 'tree', 'along', 'river', 'allowed'] ['act', 'late', 'burning', 'destroys', 'environment', 'people', 'community', 'would', 'want', 'catch', 'Catapilars', 'consumption', 'sale'] ['long', 'time', 'ago', 'people', 'poor', 'management', 'natural', 'moment', 'practice', 'maintain', 'biodiversity'] ['cultural', 'practice', 'future', 'generation', 'know', 'anything'] ['customary', 'practice', 'promote', 'early', 'burning', 'people', 'tend', 'burn', 'late'] ['unregulated', 'allocation', 'parcel', 'land', 'forest'] ['Ba', 'chipupila', 'customary', 'practice', 'protecting', 'natural', 'resource'] ['place', 'grave', 'yard', 'protected', 'deforestation'] ['land', 'natural', 'resource', 'located', 'owned', 'traditional', 'authority'] ['Chitemene', 'system', 'destroys', 'environment'] ['experienced', 'cultural', 'practice', 'interfering', 'forest', 'management'] ['old', 'parent', 'taught', 'u', 'protect', 'environment', 'like', 'customary', 'practice'] ['rule', 'made', 'customary', 'practice', 'followed'] ['place', 'allow', 'cutting', 'tree', 'well', 'cutting', 'fruit', 'bearing', 'tree'] ['protection', 'environment', 'resource', 'depleted'] ['cutting', 'tree', 'anyhow'] ['teach', 'u', 'protect', 'environment', 'preventing', 'Chitemene', 'system'] ['always', 'teach', 'people', 'community', 'protect', 'environment'] ['normally', 'give', 'rule', 'protect', 'environment'] ['traditional', 'method', 'harvesting', 'poaching', 'burning', 'cutting', 'tree'] ['people', 'cut', 
'tree', 'thus', 'leading', 'wild', 'animal', 'lacking', 'sleep', 'shelter'] ['practice', 'called', 'Malende', 'protect', 'certain', 'area', 'prohibit', 'tree', 'cut'] ['Hynas', 'eat', 'livestock', 'kill', 'conflict', 'ZAWA', 'Officers'] ['ownership', 'land', 'Chief', 'river', 'give', 'power', 'destroy', 'area', 'giving', 'cultivation', 'activity'] ['cultural', 'practice', 'prevent', 'rain', 'falling', 'Malende', 'disturbed'] ['bad', 'fishing', 'method', 'well', 'people', 'settling', 'game', 'park'] ['lack', 'support', 'WDCS', 'CRB', 'local', 'community', 'protect', 'environment', 'license', 'given', 'investor', 'cut', 'tree', 'community', 'benefit', 'sale', 'tree', 'tradition', 'authority', 'benefit'] ['people', 'still', 'want', 'lead', 'life', 'hunting', 'use', 'mosquito', 'net', 'catching', 'fish'] ['traditional', 'leader', 'prohibit', 'people', 'making', 'decision', 'sell', 'land', 'people', 'lead', 'destruction', 'environment', 'even', 'people', 'agree', 'headman', 'headman', 'say', 'land', 'sell', 'want'] ['cultural', 'activity', 'getting', 'root', 'tree', 'medicine', 'destroy', 'tree'] ['culture', 'someone', 'live', 'well', 'need', 'cultivate', 'huge', 'parcel', 'land', 'thus', 'leading', 'cutting', 'tree'] ['Chiefs', 'asking', 'people', 'stop', 'living', 'along', 'river', 'bank', 'using', 'mosquito', 'net', 'fish', 'community', 'adhearing']
CPH_R1["Cultural_Practices_Hinder_Reason"] = CPH_R1["Cultural_Practices_Hinder_Reason"].fillna("")
CPH_R1["Cultural_Practices_Hinder_Reason"] = CPH_R1["Cultural_Practices_Hinder_Reason"].astype(str)
CPH_R1_Text = " ".join(CPH_R1["Cultural_Practices_Hinder_Reason"])
wordcloud = WordCloud(background_color = "white", width = 1000, height = 400).generate(CPH_R1_Text)
plt.figure(figsize=(20, 10))
plt.imshow(wordcloud, interpolation="bilinear")
plt.title("Word Cloud: Cultural Practices Hinder (Agree and Strongly Agree)", loc="left", fontsize=20, pad=20)
plt.axis("off")
plt.show()
agreement_levels = ["Disagree_Likert", "Strongly_Disagree_Likert"]
CPH_R2 = df2[df2["Cultural_Practices_Hinder"].isin(agreement_levels)]
CPH_R3 = CPH_R2.drop(CPH_R2.columns[[0,1,2,3,4,6,7,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34]], axis = 1)
CPH_R3grouped = CPH_R3.groupby('Name_Main_Project')['Cultural_Practices_Hinder']
#CPH_R3
lemmatizer=WordNetLemmatizer()
stop_words = set(stopwords.words('english'))
for index, row in CPH_R3.iterrows():
    CPH_R3_filter_sentence = []
    CPH_R3_sentence = row["Cultural_Practices_Hinder_Reason"]
    if pd.isnull(CPH_R3_sentence):
        continue
    CPH_R3_sentence_cleaned = re.sub(r'[^\w\s]','',CPH_R3_sentence)
    CPH_R3_words = nltk.word_tokenize(CPH_R3_sentence_cleaned)
    CPH_R3_words = [lemmatizer.lemmatize(w) for w in CPH_R3_words if w.lower() not in stop_words]
    CPH_R3_filter_sentence.extend(CPH_R3_words)
    print(CPH_R3_filter_sentence)
['never', 'happed', 'life', 'time'] ['customary', 'practice', 'hinder', 'sustainable', 'management'] ['nothing'] ['nothing'] ['nothing'] ['nothing'] ['nothing'] ['cultural', 'history', 'within', 'community', 'except', 'governmnet'] ['cultural', 'norm'] ['nothing'] ['nothing'] ['nothing'] ['customary', 'practice', 'protect', 'environment', 'like', 'way', 'ZAWA', 'protects', 'biodiversity'] ['chitemene', 'system', 'customary', 'practice', 'destroys', 'environment'] ['cultural', 'practice', 'burn', 'bush', 'certain', 'period', 'harvesting'] ['cultural', 'practice'] ['nothing'] ['nothing'] ['nothing'] ['nothing'] ['nothing'] ['nothing'] ['nothing'] ['nothing'] ['nothinhg'] ['traditional', 'practice', 'government', 'guide', 'u'] ['nothing'] ['nothing'] ['nothing', 'SES', 'looked', 'game', 'park', 'officer', 'officer', 'mandated'] ['nothing'] ['nothing'] ['nothing'] ['nothing'] ['nothing'] ['nothing'] ['traditional', 'way', 'management', 'stopped', 'government', 'taken', 'management', 'forest', 'biodiversity'] ['nothing'] ['long', 'time', 'malende', 'used', 'protect', 'tree', 'moment', 'taking', 'place'] ['nothing'] ['nothing'] ['nothing'] ['heard', 'anything', 'like'] ['nothing', 'know', 'hinder'] ['nothing'] ['nothing'] ['nothing'] ['nothing'] ['nothing'] ['thing'] ['nothing'] ['nothing'] ['nothing'] ['Thee', 'nothing'] ['nothing'] ['nothing'] ['nothing'] ['nothing'] ['nothing'] ['nothing'] ['hill', 'called', 'Omba', 'release', 'smoke', 'indicating', 'particular', 'year', 'would', 'rainfall', 'evergthing', 'would', 'fine', 'regard', 'rainfall'] ['following', 'law'] ['nothing'] ['nothing'] ['nothing'] ['nothing'] ['nothing'] ['nothing'] ['nothing'] ['nothing'] ['nothing'] ['nothing'] ['nothing'] ['nothing'] ['nothing'] ['nothing'] ['nothing'] ['nothing'] ['nothing'] ['nothing'] ['nothing'] ['nothing'] ['nothing'] ['nothing'] ['nothing'] ['nothing'] ['nothing'] ['nothing'] ['nothing'] ['nothing']
CPH_R3["Cultural_Practices_Hinder_Reason"] = CPH_R3["Cultural_Practices_Hinder_Reason"].fillna("")
CPH_R3["Cultural_Practices_Hinder_Reason"] = CPH_R3["Cultural_Practices_Hinder_Reason"].astype(str)
CPH_R3_Text = " ".join(CPH_R3["Cultural_Practices_Hinder_Reason"])
wordcloud = WordCloud(background_color = "white", width = 1000, height = 400).generate(CPH_R3_Text)
plt.figure(figsize=(20, 10))
plt.imshow(wordcloud, interpolation="bilinear")
plt.title("Word Cloud: Cultural Practices Hinder (Disagree and Strongly Disagree)", loc="left", fontsize=20, pad=20)
plt.axis("off")
plt.show()
11.2 Main Project and Cultural Practices Changed¶
CPC_grouped = df2.groupby('Name_Main_Project')['Cultural_Practices_Changed'].value_counts(dropna=False)
CPC_grouped
Name_Main_Project Cultural_Practices_Changed
EbA_CENTRAL_MUCHINGA_LUAPULA Strongly_Agree_Likert 1
Strongly_Disagree_Likert 1
Undecided_Likert 1
Ecosystem Conservation_NORTH_WESTERN NaN 5
Strongly_Agree_Likert 4
Strongly_Disagree_Likert 2
PIN_WESTERN Strongly_Disagree_Likert 4
Strongly_Agree_Likert 1
NaN 1
SCRALA_SOUTHERN_WESTERN_NORTHEN Agree_Likert 11
Disagree_Likert 7
Strongly_Disagree_Likert 4
Undecided_Likert 4
Strongly_Agree_Likert 3
NaN 1
SCReBS_WESTERN Strongly_Disagree_Likert 6
Strongly_Agree_Likert 5
Agree_Likert 4
SCRiKA_LS Agree_Likert 11
Strongly_Agree_Likert 10
NaN 10
Strongly_Disagree_Likert 6
Disagree_Likert 5
Undecided_Likert 2
TRALARD_LNM Strongly_Disagree_Likert 18
Disagree_Likert 10
Strongly_Agree_Likert 5
Agree_Likert 3
Undecided_Likert 2
NaN 2
Name: count, dtype: int64
CPC_grouped1 = pd.DataFrame(CPC_grouped)
plt.figure(figsize=(8.7, 8.27))
hue_order = ["Strongly_Disagree_Likert", "Disagree_Likert", "Undecided_Likert", "Agree_Likert", "Strongly_Agree_Likert"]
ax = sns.barplot(data = CPC_grouped1, x="count", y="Name_Main_Project", hue="Cultural_Practices_Changed", hue_order=hue_order, legend=True)
ax.set_title("Figure 14: Number of Responses on Willingness to Change Cultural Practices for Sustainable Management of SES in the Main Projects", fontsize=14)
plt.legend(title="KEY")
for container in ax.containers:
    ax.bar_label(container, fmt="%.0f", label_type="edge", padding=3)
plt.show()
11.3 Main Project and Cultural Aspects Considered¶
CAC_grouped = df2.groupby('Name_Main_Project')['Cultural_Aspects_Considered'].value_counts(dropna=False)
CAC_grouped1 = pd.DataFrame(CAC_grouped)
plt.figure(figsize=(8.7, 8.27))
hue_order = ["Strongly_Disagree_Likert", "Disagree_Likert", "Undecided_Likert", "Agree_Likert", "Strongly_Agree_Likert"]
ax = sns.barplot(data = CAC_grouped1, x="count", y="Name_Main_Project", hue="Cultural_Aspects_Considered", hue_order=hue_order, legend=True)
ax.set_title("Figure 15: Number of Responses on if Cultural Aspects are Considered for Sustainable Management of SES in the Main Projects", fontsize=14)
plt.legend(title="KEY")
for container in ax.containers:
    ax.bar_label(container, fmt="%.0f", label_type="edge", padding=3)
plt.show()
agreement_levels = ["Agree_Likert", "Strongly_Agree_Likert"]
CAC_R = df2[df2["Cultural_Aspects_Considered"].isin(agreement_levels)]
CAC_R1 = CAC_R.drop(CAC_R.columns[[0,1,2,3,4,6,7,8,9,10,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34]], axis = 1)
CAC_R1grouped = CAC_R1.groupby('Name_Main_Project')['Cultural_Aspects_Considered']
#CAC_R1
lemmatizer=WordNetLemmatizer()
stop_words = set(stopwords.words('english'))
for index, row in CAC_R1.iterrows():
    CAC_R1_filter_sentence = []
    CAC_R1_sentence = row["Cultural_Aspects_Considered_Reasons"]
    if pd.isnull(CAC_R1_sentence):
        continue
    CAC_R1_sentence_cleaned = re.sub(r'[^\w\s]','',CAC_R1_sentence)
    CAC_R1_words = nltk.word_tokenize(CAC_R1_sentence_cleaned)
    CAC_R1_words = [lemmatizer.lemmatize(w) for w in CAC_R1_words if w.lower() not in stop_words]
    CAC_R1_filter_sentence.extend(CAC_R1_words)
    print(CAC_R1_filter_sentence)
['need', 'power', 'formulated', 'ownership'] ['time', 'getting', 'land', 'project', 'come', 'regulation'] ['rule', 'along', 'river', 'bank', 'tree', 'need', 'cut'] ['sometimes', 'give', 'Chiefs', 'harvest', 'collect', 'produce', 'accounted'] ['nothing', 'thing', 'ended', 'long', 'time', 'ago', 'Chiefs', 'would', 'talk', 'spirit', 'protect', 'land'] ['included'] ['customary', 'practice', 'natural', 'resource', 'get', 'destroyed'] ['answer'] ['answer'] ['done', 'product', 'project', 'work', 'well'] ['follow', 'customary', 'practice', 'people', 'burn', 'late', 'inorder'] ['livelihood', 'improve'] ['follow', 'historical', 'practice', 'forefather'] ['cultural', 'practice', 'spririal', 'rite', 'done', 'traditional', 'authority'] ['accessing', 'land', 'traditional', 'authority', 'also', 'give', 'instruction', 'put', 'livelihood', 'close', 'water', 'source', 'resource'] ['project', 'operate', 'without', 'consulting', 'traditional', 'authority'] ['answer'] ['Sometimes', 'discus', 'locally', 'animal', 'going', 'looked'] ['always', 'follow', 'law', 'taught', 'u'] ['destroy', 'environment', 'customary', 'practice'] ['changed', 'would', 'like', 'environment', 'protected', 'generation', 'see', 'future'] ['Previously', 'people', 'used', 'cut', 'tree', 'anyhow', 'change'] ['cultural', 'aspect', 'considered', 'within', 'CFMG'] ['Even', 'Bible', 'say', 'forget', 'root'] ['always', 'follow', 'told'] ['consider', 'project', 'would', 'work', 'well', 'instance', 'Livingstone', 'community', 'destroyed', 'cultural', 'activity', 'northwestern', 'came', 'different', 'region'] ['Borehole', 'sank', 'near', 'grave', 'yard', 'funeral', 'community', 'hold', 'meeting', 'village'] ['group', 'mission', 'like', 'group', 'give', 'money', 'orphan', 'profit', 'make'] [] [] ['answer'] ['considered', 'thatched', 'roof', 'using', 'pole', 'cattle', 'craw', 'cultural', 'aspect', 'difficult', 'change', 'unless', 'people', 'enough', 'money'] ['project', 'bettering', 'life'] ['taught'] ['traditional', 
'leader', 'accept', 'certain', 'project', 'done', 'specific', 'place'] ['included', 'induna'] ['indunas', 'send', 'representative', 'learn', 'accept', 'project', 'community']
CAC_R1["Cultural_Aspects_Considered_Reasons"] = CAC_R1["Cultural_Aspects_Considered_Reasons"].fillna("")
CAC_R1["Cultural_Aspects_Considered_Reasons"] = CAC_R1["Cultural_Aspects_Considered_Reasons"].astype(str)
CAC_R1_Text = " ".join(CAC_R1["Cultural_Aspects_Considered_Reasons"])
wordcloud = WordCloud(background_color = "white", width = 1000, height = 400).generate(CAC_R1_Text)
plt.figure(figsize=(20, 10))
plt.imshow(wordcloud, interpolation="bilinear")
plt.title("Word Cloud: Cultural Aspects Considered (Agree and Strongly Agree)", loc="left", fontsize=20, pad=20)
plt.axis("off")
plt.show()
agreement_levels = ["Disagree_Likert", "Strongly_Disagree_Likert"]
CAC_R2 = df2[df2["Cultural_Aspects_Considered"].isin(agreement_levels)]
CAC_R3 = CAC_R2.drop(CAC_R2.columns[[0,1,2,3,4,6,7,8,9,10,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34]], axis = 1)
CAC_R3grouped = CAC_R3.groupby('Name_Main_Project')['Cultural_Aspects_Considered']
#CAC_R3
lemmatizer=WordNetLemmatizer()
stop_words = set(stopwords.words('english'))
for index, row in CAC_R3.iterrows():
    CAC_R3_filter_sentence = []
    CAC_R3_sentence = row["Cultural_Aspects_Considered_Reasons"]
    if pd.isnull(CAC_R3_sentence):
        continue
    CAC_R3_sentence_cleaned = re.sub(r'[^\w\s]','',CAC_R3_sentence)
    CAC_R3_words = nltk.word_tokenize(CAC_R3_sentence_cleaned)
    CAC_R3_words = [lemmatizer.lemmatize(w) for w in CAC_R3_words if w.lower() not in stop_words]
    CAC_R3_filter_sentence.extend(CAC_R3_words)
    print(CAC_R3_filter_sentence)
['cultural', 'history', 'area'] ['cultural', 'aspect'] ['follow', 'chief', 'government', 'say'] ['nothing'] ['nothing'] ['nothing'] ['nothing'] ['plan', 'given', 'u', 'TRALARD', 'consideration', 'cultural', 'history', 'formulating', 'project'] ['nothing'] ['moment', 'follow', 'law', 'government', 'cultural', 'norm'] ['nothing'] ['ask', 'headman', 'give', 'u', 'place', 'keep', 'goat', 'place', 'acceptable', 'traditional', 'law'] ['nothing', 'moment', 'traditional', 'leader', 'spritual', 'activity', 'project'] ['include', 'cultural', 'practice'] ['project', 'move', 'well', 'headman', 'project', 'otherwise', 'included', 'might', 'problem'] ['available'] ['consider'] ['nothing'] ['nothing'] [] ['nothing'] [] ['unneccesary'] ['anything'] [] ['nothing'] ['include'] [] ['nothing'] ['include', 'aspect'] ['nothing'] ['nothing', 'included'] ['Nothing', 'considered'] ['nothing'] [] ['Nothing', 'considered'] ['nothing'] ['included'] ['include'] ['nothing', 'included'] ['nothing'] ['included'] ['Nothing'] ['Nothing'] ['nothing'] ['control', 'Chief', 'make', 'change'] ['nothing', 'included'] ['taken', 'consideration'] ['project', 'destroyed', 'culture'] ['nothing'] [] ['nothing'] ['project', 'associated', 'cultural', 'aspect'] ['Thee', 'nothing'] ['nothing'] [] ['considered'] ['nothing'] ['nothing'] ['nothing'] ['project', 'come', 'BRE'] ['part', 'project'] [] [] ['Nothing'] [] ['nothing'] ['follow', 'teaching'] ['cultural', 'aspect', 'included'] ['cultural', 'aspect', 'included'] [] [] ['included'] ['include'] ['nothing'] ['nothing'] []
CAC_R3["Cultural_Aspects_Considered_Reasons"] = CAC_R3["Cultural_Aspects_Considered_Reasons"].fillna("")
CAC_R3["Cultural_Aspects_Considered_Reasons"] = CAC_R3["Cultural_Aspects_Considered_Reasons"].astype(str)
CAC_R3_Text = " ".join(CAC_R3["Cultural_Aspects_Considered_Reasons"])
wordcloud = WordCloud(background_color = "white", width = 1000, height = 400).generate(CAC_R3_Text)
plt.figure(figsize=(20, 10))
plt.imshow(wordcloud, interpolation="bilinear")
plt.title("Word Cloud: Cultural Aspects Considered (Disagree and Strongly Disagree)", loc="left", fontsize=20, pad=20)
plt.axis("off")
plt.show()
11.4 Main Project and Measure of Indicators¶
MI_grouped = df2.groupby('Name_Main_Project')['Measure_Indicators'].value_counts(dropna=False)
MI_grouped1 = pd.DataFrame(MI_grouped)
plt.figure(figsize=(8.7, 8.27))
hue_order = ["Strongly_Disagree_Likert", "Disagree_Likert", "Undecided_Likert", "Agree_Likert", "Strongly_Agree_Likert"]
ax = sns.barplot(data = MI_grouped1, x="count", y="Name_Main_Project", hue="Measure_Indicators", hue_order=hue_order, legend=True)
ax.set_title("Figure 16: Number of Responses on if Measurement of Indicators is important for Sustainable Management of SES in the Main Projects", fontsize=14)
plt.legend(title="KEY")
for container in ax.containers:
    ax.bar_label(container, fmt="%.0f", label_type="edge", padding=3)
plt.show()
11.5 Main Project and Livelihood Dependent¶
LD_grouped = df2.groupby('Name_Main_Project')['Livilihood_Depenedent'].value_counts(dropna=False)
LD_grouped1 = pd.DataFrame(LD_grouped)
plt.figure(figsize=(8.7, 8.27))
hue_order = ["Strongly_Disagree_Likert", "Disagree_Likert", "Undecided_Likert", "Agree_Likert", "Strongly_Agree_Likert"]
ax = sns.barplot(data = LD_grouped1, x="count", y="Name_Main_Project", hue="Livilihood_Depenedent", hue_order=hue_order, legend=True)
ax.set_title("Figure 17: Number of Responses in each of the main Project on if Livelihoods Depend on SES", fontsize=14)
plt.legend(title="KEY")
for container in ax.containers:
    ax.bar_label(container, fmt="%.0f", label_type="edge", padding=3)
plt.show()
11.6 Main Project and Livelihood Changed¶
CL_grouped = df2.groupby('Name_Main_Project')['Change_Livelihood'].value_counts(dropna=False)
CL_grouped1 = pd.DataFrame(CL_grouped)
plt.figure(figsize=(8.7, 8.27))
hue_order = ["Strongly_Disagree_Likert", "Disagree_Likert", "Undecided_Likert", "Agree_Likert", "Strongly_Agree_Likert"]
ax = sns.barplot(data = CL_grouped1, x="count", y="Name_Main_Project", hue="Change_Livelihood", hue_order=hue_order, legend=True)
ax.set_title("Figure 18: Number of Responses in each of the main Project on if Livelihoods can be Changed", fontsize=14)
plt.legend(title="KEY")
for container in ax.containers:
    ax.bar_label(container, fmt="%.0f", label_type="edge", padding=3)
plt.show()
11.7 Main Project and Livelihood Changing Easy¶
CLE_grouped = df2.groupby('Name_Main_Project')['Change_Livelihood_Easy'].value_counts(dropna=False)
CLE_grouped1 = pd.DataFrame(CLE_grouped)
plt.figure(figsize=(8.7, 8.27))
hue_order = ["Strongly_Disagree_Likert", "Disagree_Likert", "Undecided_Likert", "Agree_Likert", "Strongly_Agree_Likert"]
ax = sns.barplot(data = CLE_grouped1, x="count", y="Name_Main_Project", hue="Change_Livelihood_Easy", hue_order=hue_order, legend=True)
ax.set_title("Figure 19: Number of Responses in each of the main Project on if Livelihoods can be Changed Easily", fontsize=14)
plt.legend(title="KEY")
for container in ax.containers:
    ax.bar_label(container, fmt="%.0f", label_type="edge", padding=3)
plt.show()
agreement_levels = ["Agree_Likert", "Strongly_Agree_Likert"]
CLE_R = df2[df2["Change_Livelihood_Easy"].isin(agreement_levels)]
CLE_R1 = CLE_R.drop(CLE_R.columns[[0,1,2,3,4,6,7,8,9,10,11,12,13,14,15,16,17,18,19,22,23,24,25,26,27,28,29,30,31,32,33,34]], axis = 1)
CLE_R1grouped = CLE_R1.groupby('Name_Main_Project')['Change_Livelihood_Easy']
#CLE_R1
lemmatizer=WordNetLemmatizer()
stop_words = set(stopwords.words('english'))
for index, row in CLE_R1.iterrows():
    CLE_R1_filter_sentence = []
    CLE_R1_sentence = row["Change_Livelihood_Easy_Reasons"]
    if pd.isnull(CLE_R1_sentence):
        continue
    CLE_R1_sentence_cleaned = re.sub(r'[^\w\s]','',CLE_R1_sentence)
    CLE_R1_words = nltk.word_tokenize(CLE_R1_sentence_cleaned)
    CLE_R1_words = [lemmatizer.lemmatize(w) for w in CLE_R1_words if w.lower() not in stop_words]
    CLE_R1_filter_sentence.extend(CLE_R1_words)
    print(CLE_R1_filter_sentence)
['SES', 'taken', 'care', 'increase', 'number', 'bring', 'income'] ['SeS', 'depleted', 'thus', 'need', 'change', 'agriculture', 'like', 'goat', 'rearing'] ['Change', 'easier', 'one', 'decides', 'change'] ['thing', 'like', 'climate', 'change', 'affecting', 'u', 'thus', 'thought', 'change'] ['new', 'improvement', 'like', 'cooking', 'stove', 'change', 'way', 'livelihood'] ['destruction', 'SES'] ['need', 'change', 'cutting', 'activity'] ['need', 'change', 'activity', 'destroy', 'environment'] ['changed', 'learning'] ['kept', 'well', 'project', 'like', 'chicken', 'goat', 'would', 'help', 'depend', 'natural', 'resource'] ['fish', 'pond', 'project'] ['also', 'improve', 'livelihood', 'future'] ['long', 'money', 'livelihood'] ['want', 'venture', 'farming', 'reason', 'shifted', 'place', 'located'] ['sub', 'project', 'bee', 'keeping', 'disturbed', 'late', 'burning', 'early', 'burning', 'disturb', 'fire', 'much'] ['need', 'change', 'instance', 'depend', 'water', 'Lulimala', 'river', 'dry', 'get', 'water'] ['natural', 'resource', 'protected', 'accessed'] ['use', 'according', 'accepted', 'norm'] ['Yes', 'normally', 'use', 'natural', 'resource', 'somehow'] ['environment', 'protected', 'business'] ['easier', 'change', 'experience', 'natural', 'resource', 'change', 'accordance', 'climate', 'change'] ['change', 'slowly', 'due', 'low', 'performance', 'group'] ['depend'] ['would', 'difficult', 'time', 'would', 'change', 'gradually'] ['need', 'change', 'world', 'developing', 'hence', 'move'] ['keeping', 'goat', 'help', 'protecting', 'environment', 'livelihood', 'depend'] ['source', 'income'] ['government', 'support', 'u', 'turn', 'protect', 'environment'] ['people', 'teaching', 'new', 'thing', 'learn'] ['thing', 'difficult', 'look', 'instance', 'honey', 'bee', 'mushroom', 'may', 'difficult', 'find'] ['protected', 'resource', 'change'] ['long', 'help', 'government'] ['farming'] ['project', 'help', 'u', 'improve', 'livelihood'] ['change', 'way', 'depending', 'cutting', 'tree', 'focus', 
'producing', 'honey', 'livelihood'] ['knowledge', 'make', 'u', 'act', 'certain', 'way', 'currently', 'future', 'might', 'new', 'knowledge', 'would', 'make', 'u', 'act', 'different', 'way', 'initial', 'one'] ['earning', 'natural', 'resource', 'develop'] ['change', 'easier', 'depent', 'people', 'easily', 'changed'] ['livelihood', 'plan', 'small', 'livestock', 'CFMG', 'well', 'garden'] ['source', 'earning', 'living'] ['management', 'forest', 'capacity', 'buildiing'] ['climate', 'change', 'force', 'change', 'livelihood'] ['depend', 'domecticated', 'animal'] ['long', 'support', 'somewhere'] ['resource', 'change', 'livelihood'] ['long', 'depend', 'livelihood'] ['enough', 'water', 'inland', 'depeneding', 'river', 'shore', 'cultivation', 'purpose', 'livelihood', 'would', 'change'] ['Thing', 'evolving', 'thus', 'stagnant'] ['long', 'helped', 'altrenative', 'livelihood'] ['trying', 'shift', 'make', 'garden', 'livelihood'] ['way', 'thing', 'climate', 'change', 'call', 'change'] ['long', 'different', 'alternative', 'livelihood'] ['Yes', 'depending', 'catle', 'looking', 'thus', 'easy', 'change'] ['profit', 'one', 'livelihood', 'change', 'another', 'type', 'livelihood'] ['easy', 'important', 'change', 'climate', 'change', 'change', 'without', 'taking', 'alternative'] ['livelihood', 'depends', 'farming', 'main', 'activity'] ['Changing', 'difficult', 'get', 'used', 'fine'] ['always', 'depending', 'farming', 'thus', 'change', 'would', 'lead', 'u', 'access', 'currently'] ['always', 'focused', 'agriculture', 'thus', 'little', 'bit', 'difficult', 'change'] ['answer'] ['would', 'prefer', 'shift', 'gardening', 'activity'] ['change', 'lead', 'better', 'life', 'easy'] ['guideline', 'assist', 'changing', 'livelihood'] ['know', 'livelihood', 'would', 'change', 'might', 'worse', 'current', 'one'] ['changed', 'dependency', 'climate', 'change'] ['long', 'advantage', 'disadvantage', 'well', 'technology', 'allow', 'done'] ['depend', 'forest', 'wetland'] ['changing', 'better', 'one'] ['Life', 
'hard', 'due', 'high', 'cost', 'commodity'] ['problem', 'boreholes'] ['easy', 'good', 'thing'] ['way', 'live', 'adapt', 'environment', 'like', 'climate', 'change', 'thus', 'adapted', 'challenge'] ['find', 'someone', 'assist', 'u', 'change'] ['knowledge', 'use', 'thing', 'given', 'u', 'government', 'live', 'good', 'life'] ['law', 'ask', 'u', 'change'] ['long', 'help', 'somewhere'] ['easier', 'long', 'time', 'process'] ['long', 'use', 'change'] ['limited', 'responsibility', 'thus', 'easy', 'change'] ['period', 'climate', 'change', 'call', 'different', 'way', 'thing'] ['empowered', 'easy'] ['use', 'knowledge', 'adquately', 'easier'] ['long', 'need', 'change', 'well', 'climate', 'change'] ['easier', 'long', 'commitment'] ['person', 'make', 'decision', 'looking', 'back', 'done', 'make', 'corrective', 'measure'] ['long', 'committed'] ['difficult', 'long', 'follow', 'taught', 'change'] ['long', 'capacity']
CLE_R1["Change_Livelihood_Easy_Reasons"] = CLE_R1["Change_Livelihood_Easy_Reasons"].fillna("")
CLE_R1["Change_Livelihood_Easy_Reasons"] = CLE_R1["Change_Livelihood_Easy_Reasons"].astype(str)
CLE_R1_Text = " ".join(CLE_R1["Change_Livelihood_Easy_Reasons"])
wordcloud = WordCloud(background_color = "white", width = 1000, height = 400).generate(CLE_R1_Text)
plt.figure(figsize=(20, 10))
plt.imshow(wordcloud, interpolation="bilinear")
plt.title("Word Cloud: Change Livelihood Easy (Agree and Strongly Agree)", loc="left", fontsize=20, pad=20)
plt.axis("off")
plt.show()
agreement_levels = ["Disagree_Likert", "Strongly_Disagree_Likert"]
CLE_R2 = df2[df2["Change_Livelihood_Easy"].isin(agreement_levels)]
CLE_R3 = CLE_R2.drop(CLE_R2.columns[[0,1,2,3,4,6,7,8,9,10,11,12,13,14,15,16,17,18,19,22,23,24,25,26,27,28,29,30,31,32,33,34]], axis = 1)
CLE_R3grouped = CLE_R3.groupby('Name_Main_Project')['Change_Livelihood_Easy']
#CLE_R3
lemmatizer=WordNetLemmatizer()
stop_words = set(stopwords.words('english'))
for index, row in CLE_R3.iterrows():
    CLE_R3_filter_sentence = []
    CLE_R3_sentence = row["Change_Livelihood_Easy_Reasons"]
    if pd.isnull(CLE_R3_sentence):
        continue
    CLE_R3_sentence_cleaned = re.sub(r'[^\w\s]','',CLE_R3_sentence)
    CLE_R3_words = nltk.word_tokenize(CLE_R3_sentence_cleaned)
    CLE_R3_words = [lemmatizer.lemmatize(w) for w in CLE_R3_words if w.lower() not in stop_words]
    CLE_R3_filter_sentence.extend(CLE_R3_words)
    print(CLE_R3_filter_sentence)
['livelihood', 'would', 'improve'] ['everything', 'use', 'come', 'natural', 'resource', 'like', 'tree', 'building', 'animal', 'protein'] ['source', 'income'] ['assist', 'adequate', 'water', 'tree', 'cut', 'well', 'future', 'general', 'see', 'natural', 'resource'] ['world', 'becoming', 'mordenised', 'thus', 'need', 'adapt', 'current', 'status'] ['Thats', 'income', 'come'] ['depend', 'natural', 'resource', 'thus', 'changing', 'easy'] ['use', 'natural', 'resource'] ['depend', 'agriculture', 'natural', 'resource', 'like', 'forest'] ['mostly', 'use', 'goat', 'pig', 'livelihood'] ['livelihood', 'engage', 'keeping', 'goat', 'fish', 'farming'] ['livelihood', 'like', 'keeping', 'goat', 'chicken', 'gardening', 'hiring', 'wedding', 'dress'] ['use', 'domesticated', 'animal'] ['use'] ['thing', 'learning', 'already'] ['money', 'protection', 'environment', 'yet', 'money', 'carbon', 'trade', 'waiting', 'long', 'time'] ['used'] ['changing', 'another', 'lifestyle', 'mean', 'starting', 'new', 'life'] ['currently', 'drought', 'thus', 'difficult', 'change'] ['natural', 'resource', 'given', 'u', 'God', 'depend', 'root', 'tree', 'medicine', 'fruit'] ['one', 'activity', 'farming', 'thus', 'changing', 'difficult'] ['livelihood', 'based', 'farming'] ['always', 'use', 'product', 'forest'] ['livelihood', 'difficult', 'moment', 'climate', 'change'] ['problem', 'thus', 'change'] ['person', 'change', 'unless', 'person', 'shown'] ['manage', 'live', 'without', 'depending', 'forest', 'like', 'craw', 'use', 'tree'] ['starting', 'point', 'difficult', 'change', 'need', 'finance'] ['livelihood', 'shifting'] ['livelihood'] ['sure', 'future', 'thus', 'cannaot', 'change'] ['challenge'] ['firewood', 'depend', 'tree', 'thus', 'challenge', 'change', 'Agriculture', 'mean', 'cutting', 'tree', 'building', 'house', 'depends', 'tree', 'Also', 'piggery', 'project', 'done', 'market', 'thus', 'project', 'effective'] ['difficult', 'money', 'pig', 'keeping', 'challenge', 'looking', 'die'] ['person', 'leading', 
'better', 'life', 'change'] ['old', 'age', 'thus', 'change', 'livelihood'] ['continue', 'trying']
CLE_R3["Change_Livelihood_Easy_Reasons"] = CLE_R3["Change_Livelihood_Easy_Reasons"].fillna("")
CLE_R3["Change_Livelihood_Easy_Reasons"] = CLE_R3["Change_Livelihood_Easy_Reasons"].astype(str)
CLE_R3_Text = " ".join(CLE_R3["Change_Livelihood_Easy_Reasons"])
wordcloud = WordCloud(background_color = "white", width = 1000, height = 400).generate(CLE_R3_Text)
plt.figure(figsize=(20, 10))
plt.imshow(wordcloud, interpolation="bilinear")
plt.title("Figure 19: Change Livelihood Easy", loc="left", fontsize=20, pad=20)
plt.axis("off")
plt.show()
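The word cloud gives a visual impression of term frequency; the same tokenized reasons can also be tallied exactly with `collections.Counter`. The sketch below uses a small hand-copied sample of the token lists printed above, not the full `CLE_R3_filter_sentence` variable:

```python
from collections import Counter

# Small sample of tokenized responses (copied from the printed output above);
# in the notebook this would be the full CLE_R3_filter_sentence list.
tokenized_reasons = [
    ["livelihood", "would", "improve"],
    ["source", "income"],
    ["depend", "natural", "resource", "thus", "changing", "easy"],
    ["use", "natural", "resource"],
    ["depend", "agriculture", "natural", "resource", "like", "forest"],
]

# Flatten the list of token lists and count term frequencies
counts = Counter(token for sentence in tokenized_reasons for token in sentence)
for term, freq in counts.most_common(5):
    print(f"{term}: {freq}")
```

The most frequent terms from this count should match the largest words in the cloud.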
11.8 Main Project and Ecosystem Service Reduction¶
ESR_grouped = df2.groupby('Name_Main_Project')['Ecosystem_Services_Reduced'].value_counts(dropna=False)
ESR_grouped1 = pd.DataFrame(ESR_grouped)
plt.figure(figsize=(8.7, 8.27))
hue_order = ["Strongly_Disagree_Likert", "Disagree_Likert", "Undecided_Likert", "Agree_Likert", "Strongly_Agree_Likert"]
ax = sns.barplot(data = ESR_grouped1, x="count", y="Name_Main_Project", hue="Ecosystem_Services_Reduced", hue_order=hue_order, legend=True)
ax.set_title("Figure 20: Number of Responses in each of the main Project on if Ecosystem services have Reduced", fontsize=14)
plt.legend(title="KEY")
for container in ax.containers:
    ax.bar_label(container, fmt="%.0f", label_type="edge", padding=3)
plt.show()
11.9 Main Project and Deforestation Increase¶
DI_grouped = df2.groupby('Name_Main_Project')['Deforestaion_Increased'].value_counts(dropna=False)
DI_grouped1 = pd.DataFrame(DI_grouped)
plt.figure(figsize=(8.7, 8.27))
hue_order = ["Strongly_Disagree_Likert", "Disagree_Likert", "Undecided_Likert", "Agree_Likert", "Strongly_Agree_Likert"]
ax = sns.barplot(data = DI_grouped1, x="count", y="Name_Main_Project", hue="Deforestaion_Increased", hue_order=hue_order, legend=True)
ax.set_title("Figure 21: Number of Responses in each of the main Project on if Deforestation has Increased", fontsize=14)
plt.legend(title="KEY")
for container in ax.containers:
    ax.bar_label(container, fmt="%.0f", label_type="edge", padding=3)
plt.show()
11.10 Main Project and Protected Areas as a Hindrance¶
PAHL_grouped = df2.groupby('Name_Main_Project')['Protected_Areas_Hinderarnce_Livelihood'].value_counts(dropna=False)
PAHL_grouped1 = pd.DataFrame(PAHL_grouped)
plt.figure(figsize=(8.7, 8.27))
hue_order = ["Strongly_Disagree_Likert", "Disagree_Likert", "Undecided_Likert", "Agree_Likert", "Strongly_Agree_Likert"]
ax = sns.barplot(data = PAHL_grouped1, x="count", y="Name_Main_Project", hue="Protected_Areas_Hinderarnce_Livelihood", hue_order=hue_order, legend=True)
ax.set_title("Figure 22: Number of Responses in each of the main Project on if Protected Areas are a Hindrance to Livelihood", fontsize=14)
plt.legend(title="KEY")
for container in ax.containers:
    ax.bar_label(container, fmt="%.0f", label_type="edge", padding=3)
plt.show()
11.11 Main Project and New Livelihood Projects¶
NLP_grouped = df2.groupby('Name_Main_Project')['New_Livelihood_Projects'].value_counts(dropna=False)
NLP_grouped1 = pd.DataFrame(NLP_grouped)
plt.figure(figsize=(8.7, 8.27))
hue_order = ["Strongly_Disagree_Likert", "Disagree_Likert", "Undecided_Likert", "Agree_Likert", "Strongly_Agree_Likert"]
ax = sns.barplot(data = NLP_grouped1, x="count", y="Name_Main_Project", hue="New_Livelihood_Projects", hue_order=hue_order, legend=True)
ax.set_title("Figure 23: Number of Responses in each of the main Project on if some Livelihood Subprojects were not Implemented", fontsize=14)
plt.legend(title="KEY")
for container in ax.containers:
    ax.bar_label(container, fmt="%.0f", label_type="edge", padding=3)
plt.show()
11.12 Main Project and Sustainability of Subprojects¶
SSC_grouped = df2.groupby('Name_Main_Project')['Subprojects_Sustainability_Contribution'].value_counts(dropna=False)
SSC_grouped1 = pd.DataFrame(SSC_grouped)
plt.figure(figsize=(8.7, 8.27))
hue_order = ["Strongly_Disagree_Likert", "Disagree_Likert", "Undecided_Likert", "Agree_Likert", "Strongly_Agree_Likert"]
ax = sns.barplot(data = SSC_grouped1, x="count", y="Name_Main_Project", hue="Subprojects_Sustainability_Contribution", hue_order=hue_order, legend=True)
ax.set_title("Figure 24: Number of Responses in each of the main Project on if Subprojects Contribute to Sustainability", fontsize=14)
plt.legend(title="KEY")
for container in ax.containers:
    ax.bar_label(container, fmt="%.0f", label_type="edge", padding=3)
plt.show()
fig = plt.figure(figsize=(30, 60))
gs = GridSpec(6, 6, figure=fig)
ax1 = fig.add_subplot(gs[0, :3])   # Row 0, columns 0-2
ax2 = fig.add_subplot(gs[0, 3:])   # Row 0, columns 3-5
ax3 = fig.add_subplot(gs[1, :3])   # Row 1, columns 0-2
ax4 = fig.add_subplot(gs[1, 3:])   # Row 1, columns 3-5
ax5 = fig.add_subplot(gs[2, :3])   # Row 2, columns 0-2
ax6 = fig.add_subplot(gs[2, 3:])   # Row 2, columns 3-5
ax7 = fig.add_subplot(gs[3, :3])   # Row 3, columns 0-2
ax8 = fig.add_subplot(gs[3, 3:])   # Row 3, columns 3-5
ax9 = fig.add_subplot(gs[4, :3])   # Row 4, columns 0-2
ax10 = fig.add_subplot(gs[4, 3:])  # Row 4, columns 3-5
ax11 = fig.add_subplot(gs[5, :3])  # Row 5, columns 0-2
ax12 = fig.add_subplot(gs[5, 3:])  # Row 5, columns 3-5
hue_order = ["Strongly_Disagree_Likert", "Disagree_Likert", "Undecided_Likert", "Agree_Likert", "Strongly_Agree_Likert", "NaN"]
ax1 = sns.barplot(data = CPH_grouped1, x="count", y="Name_Main_Project", hue="Cultural_Practices_Hinder", hue_order=hue_order, legend=True, ax=ax1)
ax1.set_title("Figure 2: Number of Responses on cultural Practices Hindering Sustainable Management of SES in the Main Projects", fontsize=14)
hue_order = ["Strongly_Disagree_Likert", "Disagree_Likert", "Undecided_Likert", "Agree_Likert", "Strongly_Agree_Likert"]
ax2 = sns.barplot(data = CPC_grouped1, x="count", y="Name_Main_Project", hue="Cultural_Practices_Changed", hue_order=hue_order, legend=True, ax=ax2)
ax2.set_title("Figure 3: Number of Responses on Willingness to Change Cultural Practices for Sustainable Management of SES in the Main Projects", fontsize=14)
hue_order = ["Strongly_Disagree_Likert", "Disagree_Likert", "Undecided_Likert", "Agree_Likert", "Strongly_Agree_Likert"]
ax3 = sns.barplot(data = CAC_grouped1, x="count", y="Name_Main_Project", hue="Cultural_Aspects_Considered", hue_order=hue_order, legend=True, ax=ax3)
ax3.set_title("Figure 4: Number of Responses on if Cultural Aspects are Considered for Sustainable Management of SES ", fontsize=14)
hue_order = ["Strongly_Disagree_Likert", "Disagree_Likert", "Undecided_Likert", "Agree_Likert", "Strongly_Agree_Likert"]
ax4 = sns.barplot(data = MI_grouped1, x="count", y="Name_Main_Project", hue="Measure_Indicators", hue_order=hue_order, legend=True, ax=ax4)
ax4.set_title("Figure 5: Number of Responses on if Measurement of Indicators is important for Sustainable Management of SES", fontsize=14)
hue_order = ["Strongly_Disagree_Likert", "Disagree_Likert", "Undecided_Likert", "Agree_Likert", "Strongly_Agree_Likert"]
ax5 = sns.barplot(data = LD_grouped1, x="count", y="Name_Main_Project", hue="Livilihood_Depenedent", hue_order=hue_order, legend=True, ax=ax5)
ax5.set_title("Figure 6: Number of Responses in each of the main Project on if Livelihoods Depend on SES", fontsize=14)
hue_order = ["Strongly_Disagree_Likert", "Disagree_Likert", "Undecided_Likert", "Agree_Likert", "Strongly_Agree_Likert"]
ax6 = sns.barplot(data = CL_grouped1, x="count", y="Name_Main_Project", hue="Change_Livelihood", hue_order=hue_order, legend=True, ax=ax6)
ax6.set_title("Figure 7: Number of Responses in each of the main Project on if Livelihoods can be Changed", fontsize=14)
hue_order = ["Strongly_Disagree_Likert", "Disagree_Likert", "Undecided_Likert", "Agree_Likert", "Strongly_Agree_Likert"]
ax7 = sns.barplot(data = CLE_grouped1, x="count", y="Name_Main_Project", hue="Change_Livelihood_Easy", hue_order=hue_order, legend=True, ax=ax7)
ax7.set_title("Figure 8: Number of Responses in each of the main Project on if Livelihoods can be Changed Easily", fontsize=14)
hue_order = ["Strongly_Disagree_Likert", "Disagree_Likert", "Undecided_Likert", "Agree_Likert", "Strongly_Agree_Likert"]
ax8 = sns.barplot(data = ESR_grouped1, x="count", y="Name_Main_Project", hue="Ecosystem_Services_Reduced", hue_order=hue_order, legend=True, ax=ax8)
ax8.set_title("Figure 9: Number of Responses in each of the main Project on if Ecosystem services have Reduced", fontsize=14)
hue_order = ["Strongly_Disagree_Likert", "Disagree_Likert", "Undecided_Likert", "Agree_Likert", "Strongly_Agree_Likert"]
ax9 = sns.barplot(data = DI_grouped1, x="count", y="Name_Main_Project", hue="Deforestaion_Increased", hue_order=hue_order, legend=True, ax=ax9)
ax9.set_title("Figure 10: Number of Responses in each of the main Project on if Deforestation has Increased", fontsize=14)
hue_order = ["Strongly_Disagree_Likert", "Disagree_Likert", "Undecided_Likert", "Agree_Likert", "Strongly_Agree_Likert"]
ax10 = sns.barplot(data = PAHL_grouped1, x="count", y="Name_Main_Project", hue="Protected_Areas_Hinderarnce_Livelihood", hue_order=hue_order, legend=True, ax=ax10)
ax10.set_title("Figure 11: Number of Responses in each of the main Project on if Protected Areas are a Hindrance to Livelihood", fontsize=14)
hue_order = ["Strongly_Disagree_Likert", "Disagree_Likert", "Undecided_Likert", "Agree_Likert", "Strongly_Agree_Likert"]
ax11 = sns.barplot(data = NLP_grouped1, x="count", y="Name_Main_Project", hue="New_Livelihood_Projects", hue_order=hue_order, legend=True, ax=ax11)
ax11.set_title("Figure 12: Number of Responses in each of the main Project on if some Livelihood Subprojects were not Implemented", fontsize=14)
hue_order = ["Strongly_Disagree_Likert", "Disagree_Likert", "Undecided_Likert", "Agree_Likert", "Strongly_Agree_Likert"]
ax12 = sns.barplot(data = SSC_grouped1, x="count", y="Name_Main_Project", hue="Subprojects_Sustainability_Contribution", hue_order=hue_order, legend=True, ax=ax12)
ax12.set_title("Figure 13: Number of Responses in each of the main Project on if Subprojects Contribute to Sustainability", fontsize=14)
plt.tight_layout()
plt.savefig("12charts.png", dpi=300)
plt.savefig("12graph.jpg")
plt.show()
12. Converting the Notebook¶
import nbformat
from nbconvert import HTMLExporter

# Read the notebook file
with open('02_Landscape_Transformation_Livelihood.ipynb', 'r') as f:
    notebook = nbformat.read(f, as_version=4)
# Initialize the HTML Exporter
html_exporter = HTMLExporter()
(body, resources) = html_exporter.from_notebook_node(notebook)
# Save the HTML output
with open('02_Landscape_Transformation_Livelihood.html', 'w') as f:
    f.write(body)
print("Conversion to HTML completed!")
13. Converting to Word Document¶
#!pandoc 02_Landscape_Transformation_Livelihood.md -o output.docx
!pandoc 02_Landscape_Transformation_Livelihood.html -o D:/DataAnalysis/LandscapeTransformationLivelihood.docx
UTF-8 decoding error in 02_Landscape_Transformation_Livelihood.html at byte offset 243172 (95). The input must be a UTF-8 encoded text.
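The error above means the exported HTML contains at least one byte sequence that is not valid UTF-8, which pandoc refuses to decode. One workaround (a sketch; the helper name and the cleaned-copy filename are illustrative, not from the original notebook) is to re-save the file with undecodable bytes replaced by the Unicode replacement character, then point pandoc at the cleaned copy:

```python
def reencode_utf8(src, dst):
    """Read src, replacing any bytes that are not valid UTF-8 with U+FFFD,
    and write a valid UTF-8 copy to dst so pandoc can parse it."""
    with open(src, "r", encoding="utf-8", errors="replace") as f:
        text = f.read()
    with open(dst, "w", encoding="utf-8") as f:
        f.write(text)
    return dst

# Usage in the notebook (filenames assumed from the cells above):
# reencode_utf8("02_Landscape_Transformation_Livelihood.html",
#               "02_Landscape_Transformation_Livelihood_utf8.html")
```

The replacement characters may leave small visual artifacts in the Word output, but the conversion itself can then complete.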
#subprocess.run(["pandoc", "02_Landscape_Transformation_Livelihood.md", "-o", "02_Landscape_Transformation_Livelihood.docx"])
#subprocess.run(["pandoc", "02_Landscape_Transformation_Livelihood.md", "-o", "C:/Users/nazin/Data_Analysis/02_Landscape_Transformation.docx"])
result = subprocess.run(["pandoc", r"02_Landscape_Transformation_Livelihood.html", "-o", r"C:/Users/nazin/Data_Analysis/02_Landscape_Transformation.docx"], capture_output=True, text=True)
if result.returncode == 0:
    print("Conversion successful! File saved as 02_Landscape_Transformation.docx")
else:
    print("Conversion failed:", result.stderr)
Conversion successful! File saved as 02_Landscape_Transformation.docx
print(shutil.which("pandoc"))
C:\Users\nazin\AppData\Local\anaconda3\envs\NLTK_Py_3_12\Scripts\pandoc.EXE